# 4. Traffic Assignment Methods (Traffic Assignment Method)
4 Traffic Assignment Methods (p. 55)
A traffic assignment method takes (1) an OD travel-demand function, (2) a traffic network, and (3) link performance functions, and computes the traffic volume and level of service on each link and route. The main variants are:
* flow-independent vs. flow-dependent assignment
* static vs. dynamic assignment
* deterministic vs. stochastic assignment
* user-equilibrium vs. system-optimal assignment

[1] User Equilibrium

[Mathematical Program 4.1] (1)

min z(x) = Σ_a ∫_0^{x_a} t_a(ω) dω
s.t. Σ_k f_k = q, f_k ≥ 0

Lagrangian: L(f, u) = z(x) + u (q − Σ_k f_k)
Kuhn-Tucker conditions (necessary conditions)

[Mathematical Program 4.1] (2) User equilibrium: when every OD pair is in a user-equilibrium state, the travel times on all used routes are equal, and are no greater than the travel time on any unused route.
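The equilibrium condition can be illustrated with a small numerical sketch. The two-route network and the travel-time functions below are assumptions for illustration, not taken from the text:

```python
# User equilibrium on a hypothetical two-route OD pair.
# Assumed route travel-time functions: t1(f) = 10 + f, t2(f) = 15 + 0.5 f,
# with OD demand q = 20.  At user equilibrium both used routes have equal time.

def user_equilibrium(q=20.0):
    """Find f1 in [0, q] with t1(f1) == t2(q - f1) by bisection.
    t1(f1) - t2(q - f1) is increasing in f1, so bisection applies."""
    t1 = lambda f: 10 + f
    t2 = lambda f: 15 + 0.5 * f
    lo, hi = 0.0, q
    for _ in range(100):
        mid = (lo + hi) / 2
        if t1(mid) - t2(q - mid) < 0:
            lo = mid
        else:
            hi = mid
    f1 = (lo + hi) / 2
    return f1, q - f1, t1(f1)

f1, f2, t = user_equilibrium()
print(round(f1, 3), round(f2, 3), round(t, 3))  # both routes carry flow at equal time
```

With these assumed functions the demand splits evenly and both routes take the same time, so no user can improve by switching routes.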

[Mathematical Program 4.2] (1) Extension from a single OD pair to multiple OD pairs:

min z(x) = Σ_a ∫_0^{x_a} t_a(ω) dω
s.t. Σ_k f_k^rs = q_rs for all rs, f_k^rs ≥ 0, x_a = Σ_rs Σ_k f_k^rs δ_{a,k}^rs

where
x_a: the flow on link a
f_k^rs: the flow on the k-th route of OD pair rs
q_rs: the OD flow of pair rs
δ_{a,k}^rs: an element of the link-route incidence matrix of OD pair rs
The Lagrangian is formed as in Program 4.1.

[Mathematical Program 4.2] (2) Kuhn-Tucker conditions (necessary conditions):

f_k^rs (c_k^rs − u_rs) = 0 and c_k^rs − u_rs ≥ 0

where c_k^rs is the travel time on the k-th route of OD pair rs; that is, c_k^rs = u_rs if f_k^rs > 0, and c_k^rs ≥ u_rs if f_k^rs = 0.
If the link performance functions are monotonically increasing, the objective function is strictly convex in the link flows, so the link-flow solution is unique. It is not strictly convex in the route flows, however, so the route-flow solution is not unique: even when the link flows are given, infinitely many route-flow patterns satisfy them, and no definite route-flow values can be determined (Exercise 4.4).
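The non-uniqueness of route flows can be seen concretely on a small made-up network (links and routes below are assumptions for illustration): two distinct route-flow patterns induce exactly the same link flows.

```python
# Hypothetical network: links a1, a2 connect node 1 to 2 in parallel,
# links a3, a4 connect node 2 to 3, giving four routes for OD pair (1, 3).
# delta[a][k] = 1 if link a lies on route k, else 0.
delta = [
    [1, 1, 0, 0],  # a1 on routes 1, 2
    [0, 0, 1, 1],  # a2 on routes 3, 4
    [1, 0, 1, 0],  # a3 on routes 1, 3
    [0, 1, 0, 1],  # a4 on routes 2, 4
]

def link_flows(f):
    """x_a = sum_k f_k * delta[a][k]  (link-flow superposition)."""
    return [sum(d * fk for d, fk in zip(row, f)) for row in delta]

f_a = [5.0, 0.0, 0.0, 5.0]   # one route-flow pattern, total demand 10
f_b = [2.5, 2.5, 2.5, 2.5]   # a different pattern, same demand
print(link_flows(f_a))  # [5.0, 5.0, 5.0, 5.0]
print(link_flows(f_b))  # [5.0, 5.0, 5.0, 5.0]
```

Both patterns produce the same link-flow vector, so link flows alone cannot pin down the route flows.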

[2] System-Optimal Assignment (p. 70)
An assignment that minimizes the total travel cost of all network users. The total travel cost is Σ_a x_a t_a(x_a), and its derivative with respect to the flow on link a, i.e., the rate at which total cost increases when the flow on link a increases marginally, is the marginal cost function:

t̃_a(x_a) = t_a(x_a) + x_a dt_a(x_a)/dx_a    (4.8)

The first term is the average cost borne by the newly added user; the second term is the increase in the costs of the other users caused by that additional user.
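A minimal sketch of the difference between the two principles, on an assumed two-route example (t1(f) = 10 + f, t2(f) = 15 + 0.5f, demand q = 20; these functions are illustrative, not from the text). User equilibrium equalizes travel times, while the system optimum equalizes the marginal costs of Eq. (4.8), m1(f) = 10 + 2f and m2(f) = 15 + f:

```python
q = 20.0

def total_cost(f1):
    """Total travel cost f1*t1(f1) + f2*t2(f2) with f2 = q - f1."""
    f2 = q - f1
    return f1 * (10 + f1) + f2 * (15 + 0.5 * f2)

f1_ue = 10.0        # equal travel times:   10 + f1 = 15 + 0.5*(q - f1)
f1_so = 25.0 / 3.0  # equal marginal costs: 10 + 2*f1 = 15 + (q - f1)

print(total_cost(f1_ue))            # 400.0
print(round(total_cost(f1_so), 2))  # 395.83 — the SO total cost is lower
```

The system optimum shifts some flow off the congested route, lowering total cost, even though the two routes then have unequal travel times.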

4.2 Traffic Assignment Methods (p. 72)
[3] All-or-Nothing Assignment: assigns all of the OD flow to the shortest route between the OD pair and nothing to any other route.

2.2 Shortest-Path Search [5] Shortest-path search algorithms

1) A simple shortest-path assignment example
Origin node 1; destination nodes 7, 8, 9; q_17 = 10, q_18 = 5, q_19 = 20.
Here q_rs is the OD flow, t_ij the link cost, C_p^rs the route cost, and f_p^rs the route flow. The cost C_p^rs of the p-th route of OD pair rs is the sum of the costs t_ij of the links on that route:

C_p^rs = Σ t_ij δ_{ij,p}^rs

where δ_{ij,p}^rs is an element of the link-route incidence matrix: 1 if link ij lies on the p-th route of OD pair rs, 0 otherwise.
Example: C_2^17 = t_12 + t_25 + t_57 = 4 + 3 + 2 = 9

Link flow x_ij: the superposition of the flows on all routes passing through the link:

x_ij = Σ_rs Σ_p f_p^rs δ_{ij,p}^rs

For example, x_58 = f_1^18 + f_2^18 + f_4^19 + f_5^19 = 25. Superposing the shortest-route flows in this way yields every link flow, completing the shortest-path assignment:
x_12 = 35, x_13 = 0, x_24 = 0, x_25 = 35, x_35 = 0, x_36 = 0, x_47 = 0, x_57 = 10, x_58 = 25, x_68 = 0, x_79 = 0, x_89 = 20

(1) Shortest-path assignment
Enumerating all routes is practically impossible in a real network, so a shortest-path search algorithm is used instead.
a. Label Setting Method: fixes one or more labels at each iteration (link costs must be nonnegative).

a. Label Setting Method
Permanent labels P, temporary labels T, origin r.
Step 0 [Initialization]:
(1) Initialize the label sets: give the origin r a permanent label and every other node a temporary label: P ← {r}, T ← N − {r}.
(2) Initialize the label values: d(j) ← 0 if j = r; d(j) ← t_rj if (r, j) ∈ A; d(j) ← ∞ otherwise.
(3) Initialize the predecessor pointers: pred(j) ← r if (r, j) ∈ A; pred(j) ← 0 otherwise.

a. Label Setting Method (continued)
Step 1 [Node selection]: move the node i with the smallest temporary label from T to P: d(i) = min{d(j) : j ∈ T}, then P ← P ∪ {i}, T ← T − {i}.
Step 2 [Label update]: for the head node j of each link i → j ∈ A, if d(j) > d(i) + t_ij then d(j) ← d(i) + t_ij and pred(j) ← i.
Step 3 [Termination]: stop if P = N; otherwise return to Step 1.
d(i) is the shortest distance from the origin r; the shortest path itself is recovered by tracing the predecessor pointers back from node i.
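The label-setting steps above can be sketched directly in Python (the small test network at the bottom is hypothetical, not the figure's network):

```python
import heapq

def label_setting(nodes, links, r):
    """Label-setting (Dijkstra) shortest paths from origin r.
    nodes: iterable of node ids; links: dict (i, j) -> nonnegative cost t_ij.
    Returns (d, pred): shortest distances and predecessor pointers."""
    adj = {}
    for (i, j), t in links.items():
        adj.setdefault(i, []).append((j, t))
    d = {j: float("inf") for j in nodes}   # Step 0: label values
    pred = {j: 0 for j in nodes}           # Step 0: predecessor pointers
    d[r] = 0.0
    T = [(0.0, r)]                         # temporary labels as a min-heap
    P = set()                              # permanent labels
    while T:
        di, i = heapq.heappop(T)           # Step 1: smallest temporary label
        if i in P:
            continue                       # stale heap entry, skip
        P.add(i)
        for j, t in adj.get(i, []):        # Step 2: label update
            if d[j] > di + t:
                d[j] = di + t
                pred[j] = i
                heapq.heappush(T, (d[j], j))
    return d, pred                         # Step 3: all labels permanent

# Hypothetical 4-node network:
links = {(1, 2): 4, (1, 3): 2, (3, 2): 1, (2, 4): 5, (3, 4): 8}
d, pred = label_setting([1, 2, 3, 4], links, r=1)
print(d[4], pred[4])   # 8.0 2  (shortest path 1-3-2-4, traced via pred)
```

The heap stands in for "pick the minimum temporary label"; tracing pred(4) → pred(2) → pred(3) → 1 reconstructs the path, exactly as the slide describes.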

Step 0 [Initialization]: n ← 1, x_ij ← 0 for all ij.
Step 1 [Shortest-path search]: for the n-th OD pair, find the shortest path and record the links on it in the set A_n.
Step 2 [OD flow assignment]: set the per-OD link flow y_ij to the OD flow of pair n: y_ij ← q_rs for (i, j) ∈ A_n.
Step 3 [Link flow accumulation]: x_ij ← x_ij + y_ij for (i, j) ∈ A_n.
Step 4 [Termination]: stop if n = N; otherwise set n ← n + 1 and return to Step 1.
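The all-or-nothing procedure above is a loop over OD pairs around a shortest-path search. A self-contained sketch (the network and OD table are assumptions for illustration):

```python
import heapq

def shortest_path(links, r, s):
    """Return the links on a shortest r->s path (Dijkstra + trace-back)."""
    adj = {}
    for (i, j), t in links.items():
        adj.setdefault(i, []).append((j, t))
    d, pred, done = {r: 0.0}, {}, set()
    heap = [(0.0, r)]
    while heap:
        di, i = heapq.heappop(heap)
        if i in done:
            continue
        done.add(i)
        for j, t in adj.get(i, []):
            if di + t < d.get(j, float("inf")):
                d[j], pred[j] = di + t, i
                heapq.heappush(heap, (d[j], j))
    path, j = [], s
    while j != r:                      # trace predecessors back from s
        path.append((pred[j], j))
        j = pred[j]
    return path

def all_or_nothing(links, od):
    """Steps 0-4: load each OD flow entirely onto its shortest path."""
    x = {a: 0.0 for a in links}                # Step 0: x_ij <- 0
    for (r, s), q in od.items():               # one OD pair per iteration
        for a in shortest_path(links, r, s):   # Step 1: record links A_n
            x[a] += q                          # Steps 2-3: accumulate q_rs
    return x                                   # Step 4: all pairs processed

links = {(1, 2): 4, (1, 3): 2, (3, 2): 1, (2, 4): 5, (3, 4): 8}
od = {(1, 4): 10.0, (1, 2): 5.0}
x = all_or_nothing(links, od)
print(x)   # links (1,3) and (3,2) carry 15, (2,4) carries 10, the rest 0
```

Both OD pairs route through node 3, so the flows superpose on the shared links, mirroring the worked example earlier in the section.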

[National Civil Service Examination (Class I)]
Four regions A-D are connected by the road network shown in the figure, with the distances between them as indicated. A new road is to be built between m and n. If the travel speed on the new road is twice that on the existing roads, how much traffic diverts to the new road? Assume every vehicle takes the time-shortest route, and that the daily traffic between each pair of regions is given by the OD table below.
(1) 100  (2) 230  (3) 380  (4) 420  (5) 460
Solution: since the speed on the m-n section is doubled, treat it as 10 km. The only traffic diverted by the new road is A⇔B and A⇔C: 2 × ( ) = 460.

------------------- Optimization Problems --------------

2. Nonlinear Programming
(非線形計画問題) Conditions assumed throughout:
1) Both the objective function (目的関数) and the constraint functions (制約条件) are continuous (連続) and differentiable (微分可能).
2) The feasible region (実行可能領域) is convex (凸).
3) The objective function is strictly convex (狭義凸).
→ A unique solution (唯一解)
Today I will explain nonlinear programming. Do you remember this figure? The methods I will explain are very powerful, but their range of application is limited. This limitation, however, does not reduce the importance of the methods. The first condition is that both the objective function and the constraint functions are continuous and differentiable; the second is that the feasible region is convex; in addition, the objective function is strictly convex. These conditions guarantee the uniqueness of the solution. Uniqueness is a very important issue when searching for an optimum: if it is not guaranteed, we cannot stop the search even after finding a good solution, because a better one may still exist. Uniqueness allows us to stop searching for other solutions.

Nonlinear Programming: Classification
Nonlinear programs are divided into four groups by two criteria: whether the number of variables is one or many, and whether the number of constraints is zero or not. This class covers the four methods corresponding to these four groups, starting with the easiest case: one variable and no constraints. (An example of a constrained one-variable feasible region: b1 ≤ x1 ≤ b2.)

2.1 Programs with One Variable
First-order condition: it is well known from elementary calculus that the necessary condition for a differentiable function of one variable, z(x), to have a minimum at x = x* is that the derivative of z(x) evaluated at x* equals zero. In other words,

dz(x*)/dx = 0   [2.3]

This is a first-order condition for a minimum: if there is a minimum at x*, this condition must hold.

Fig.2.1 ditonic: a function with a single minimum. (unimodal)
As this figure demonstrates, however, the first-order condition is not sufficient to ensure that x* is a minimum of z(x). In the figure, dz(x)/dx = 0 at four values of x, namely x = a, x = b, x = c, and x = d, all of which are stationary points of z(x). Note that x = a and x = d are local minima (i.e., they minimize the function in their immediate vicinity), while x = b is a point of inflection and x = c is a (local) maximum. To prove that a stationary point (such as x = a in Figure 2.1) is a minimum, two characteristics have to be demonstrated: (1) that it is a local minimum and not a local maximum or an inflection point, and (2) that it is a global minimum; in other words, that the value of z(x) at x* is lower than z(x) at any other x. A function with a single minimum is called ditonic or unimodal. A function is ditonic if it is strictly decreasing to the left of the minimum and strictly increasing to the right; thus, if a function is ditonic, its stationary point is a global minimum. A sufficient condition for a stationary point to be a local minimum is for the function to be strictly convex in the vicinity of the stationary point. Intuitively, strict convexity means that the function is "heavy in the middle," as illustrated in the vicinity of x = a and x = d.

Conditions for Strictly Convex Functions (second-order conditions)
1) The line segment condition
2) The linear approximation condition
3) The second derivative condition
Formally, strict convexity means that a line segment connecting any two points of the function lies entirely above the function: a function is strictly convex if

z(θx1 + (1 − θ)x2) < θz(x1) + (1 − θ)z(x2)

for any two distinct points x1 and x2 and for any θ with 0 < θ < 1. Alternatively, strict convexity of a differentiable function can be determined by testing whether a linear approximation to the function always underestimates it: z(x) is strictly convex at x1 if and only if

z(x2) > z(x1) + dz(x1)/dx · (x2 − x1)

for any point x2 ≠ x1. Finally, if a function is twice differentiable in the vicinity of a stationary point, strict convexity there is equivalent to requiring that the second derivative of z(x) at x* be positive: d²z(x*)/dx² > 0. If a function is strictly convex in the vicinity of a stationary point, that point is a local minimum. If a function includes more than one local minimum, it is usually difficult to demonstrate that a particular local minimum is also a global one: doing so may require identifying all the local minima and comparing the objective function values at these points, which can be very difficult and is sometimes impossible. If, however, x* is the only local minimum of z(x), then naturally it is also a global minimum. To demonstrate that a local minimum is unique, it is sufficient to show that z(x) is convex for all values of x.
Please remember: if one of these three conditions is satisfied, the function is strictly convex, it has only one local minimum, and that minimum is guaranteed to be global.
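The three conditions can be spot-checked numerically. The toy function z(x) = x² below is an assumed example, not one from the text:

```python
# Spot-check of the three strict-convexity conditions for z(x) = x**2,
# with dz/dx = 2x and d2z/dx2 = 2 (an assumed toy example).
z = lambda x: x * x
dz = lambda x: 2 * x

x1, x2, theta = -1.0, 3.0, 0.3

# 1) line segment condition: the chord lies above the function
lhs = z(theta * x1 + (1 - theta) * x2)
rhs = theta * z(x1) + (1 - theta) * z(x2)
print(lhs < rhs)          # True

# 2) linear approximation condition: the tangent underestimates the function
approx = z(x1) + dz(x1) * (x2 - x1)
print(z(x2) > approx)     # True

# 3) second derivative condition: d2z/dx2 = 2 > 0 everywhere
print(2 > 0)              # True
```

A single pair of sample points cannot prove convexity, of course; the check only illustrates what each inequality says.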

Fig.2.2

Constrained Minimization Programs
Binding constraint: in constrained minimization it is possible to have a minimum where the first derivative does not vanish. As a result, the first-order conditions have to be generalized to include this possibility. Consider, for example, a function z(x) defined over the feasible region a′ ≤ x ≤ a″. In the first case the minimum point is internal (inside the feasible region) and the condition dz(x*)/dx = 0 holds, while in Figure b the minimum point is on the boundary, at x = a′, and clearly dz(a′)/dx > 0. When the minimum is on the boundary of the feasible region, the following observation can be made: if the constraint on which the solution lies is the left edge, then z(x) must be increasing at x*, as in Figure b; if, on the other hand, the binding constraint is the right edge, then z(x) must be decreasing at x*, as in Figure c. Please keep in mind that a binding constraint is a constraint on which the optimum is attained.

Standard Form (標準形)

min z(x)  subject to  x ≥ a′,  −x ≥ −a″

or, generalized,

min z(x)  subject to  g1(x) ≥ b1,  g2(x) ≥ b2

where g1(x) = x, b1 = a′ and g2(x) = −x, b2 = −a″. To characterize these situations mathematically in a statement of first-order conditions, the problem depicted in the previous figure is first written in this standard form and then generalized as above.

Standard Form: binding constraint (有効制約条件)
When the constraints are expressed in standard form, the earlier observation about the slope of z(x) at x* means that the derivative of z(x) at x* and the derivative of the binding constraint at that point, dg(x*)/dx, always have the same sign. This observation can be used to extend the first-order conditions, because it holds whenever a minimum occurs on the boundary of the feasible region as well as for an unconstrained solution. In the example of Figure b this condition can be verified since dg1(x*)/dx = 1 and, from the figure, dz(x*)/dx > 0, whereas for Figure c, dg2(x*)/dx = −1 and dz(x*)/dx < 0. Since both derivatives are scalars, the same-signs condition can be written as

dz(x*)/dx = u_j dg_j(x*)/dx

where u_j is a nonnegative scalar and g_j(x) is the binding constraint. To develop a set of first-order conditions that applies in all cases, define a nonnegative variable u_j for each of the constraints (not only the binding ones). If the j-th constraint is not binding, i.e., if g_j(x*) > b_j, let u_j = 0. In other words, the condition [b_j − g_j(x*)] u_j = 0 holds for all constraints j. Using this convention, the same-signs condition can be replaced by the condition on the next slide.

The general first-order conditions for the minimum of a single-variable function are

dz(x*)/dx − Σ_j u_j dg_j(x*)/dx = 0,  with u_j ≥ 0 and [b_j − g_j(x*)] u_j = 0 for all j

where u_j is a nonnegative scalar and the sum includes all the constraints. Note that this generalizes the condition for an unconstrained minimum (see Eq. [2.3]): an unconstrained minimization problem can be looked upon as a constrained problem with an internal minimum point, in which case every u_j = 0 (since none of the constraints is binding) and the condition simply states that dz(x*)/dx = 0. These conditions mean that the derivative of the function at the minimum and the derivatives of the binding constraints at the minimum have the same signs, and that the minimum point is (of course) feasible.

2.2 Multidimensional Programs (多変数)
In multidimensional problems the objective function is given by z(x), where x = (x1, …, xI) is a vector of length I (i.e., an array of variables x1, x2, …, xI). The constraints are given by gj(x) ≥ bj. These equations can be simplified by using vector notation: letting x denote the vector of decision variables, x = (x1, x2, …, xI), the program can be written as

min z(x)  subject to  gj(x) ≥ bj for all j

This is the standard form. Note that all constraints are specified with a "greater than or equal to" sign. Constraints of the "less than or equal to" type can be converted to standard form by multiplying them by −1, and an equality constraint gj(x) = bj can be expressed as the pair of constraints gj(x) ≥ bj and gj(x) ≤ bj. Again, this discussion of multivariable minimization programs deals separately with unconstrained and constrained problems, including a review of both first- and second-order conditions in each case. As in the preceding section, the regularity conditions on z(x) and gj(x) are assumed to hold, so a solution always exists.

Unconstrained Minimization Programs (制約条件なし，多変数)
First-Order Condition (一次条件): if the function to be minimized is unconstrained, the first-order condition for a minimum at x = x* is that the gradient of z(x) vanish at x*. The gradient of z(x) with respect to x, ∇x z(x), is the vector of partial derivatives:

∇x z(x) = (∂z/∂x1, ∂z/∂x2, …, ∂z/∂xI)   [2.8]

The subscript x in the gradient notation is usually omitted when the variable with respect to which the partial derivatives are taken is obvious. At every point x, the gradient points in the direction of the steepest increase in z(x); thus the gradient is the directional derivative of z(x) in the direction of steepest ascent. As mentioned above, the first-order condition for a minimum is

∇z(x*) = 0   [2.9a]

meaning that each component of the gradient has to equal zero. In other words,

∂z(x*)/∂xi = 0 for every i   [2.9b]

This condition is entirely analogous to the condition dz(x*)/dx = 0 used in the single-variable case. It is only a necessary condition for a minimum; that is, it establishes that z(x) has a stationary point at x*.

Example: z(x1, x2) = 2x1² + x2² + 2x1x2 − 4x1 − 2x2
Let's find a stationary point of this function z of x1 and x2.
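Setting the gradient of this function to zero gives a pair of linear equations, which a short script can solve (a sketch; Cramer's rule is used just to keep it self-contained):

```python
# Stationary point of z(x1,x2) = 2x1^2 + x2^2 + 2x1x2 - 4x1 - 2x2.
# grad z = (4x1 + 2x2 - 4, 2x1 + 2x2 - 2) = 0 is the linear system
#   4x1 + 2x2 = 4
#   2x1 + 2x2 = 2
def solve2(a, b, c, d, e, f):
    """Solve a*x + b*y = e, c*x + d*y = f by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

x1, x2 = solve2(4, 2, 2, 2, 4, 2)
print(x1, x2)   # 1.0 0.0 — the stationary point is (1, 0)
z = 2 * x1**2 + x2**2 + 2 * x1 * x2 - 4 * x1 - 2 * x2
print(z)        # -2.0
```

Whether (1, 0) is actually a minimum depends on the second-order conditions that follow.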

Conditions for Strictly Convex Functions (second-order conditions(２次条件))
1) The line segment condition
2) The linear approximation condition
3) The second derivative condition
To show that a stationary point x* is a local minimum, it is sufficient to demonstrate that z(x) is strictly convex in the vicinity of x = x*, as was the case with the single-dimensional function. The conditions for strict convexity in the multidimensional case parallel those given in the preceding section for single-variable functions. The line segment condition: the function is strictly convex if

z(θx′ + (1 − θ)x″) < θz(x′) + (1 − θ)z(x″)

where x′ and x″ are two different values of x and θ is a scalar between zero and one. The linear approximation condition: the function is strictly convex if it is always underestimated by a linear approximation, that is,

z(x″) > z(x′) + ∇z(x′)ᵀ(x″ − x′)

where the superscript T denotes the vector transposition operation. The second derivative condition: if the Hessian ∇²z(x) is positive definite, the function is also strictly convex. Recalling the conditions for strict convexity in one variable, the first two conditions are similar for many variables and a single variable; the difference is just between vectors and scalars. As the second derivative condition is a little different, it is explained in detail below, starting with what ∇²z is.

Hessian: ∇²z is a matrix called the Hessian matrix. The element in the i-th row and j-th column is the second partial derivative of z with respect to xi and xj: H_ij = ∂²z/∂xi∂xj.

Positive Definite(正定値)
1) A matrix H is positive definite if the product hHhᵀ is positive for every nonzero vector h.
2) A matrix H is positive definite if all of its leading principal minor determinants are positive.
These are two equivalent tests for positive definiteness. The leading principal minor determinants are explained first, followed by an example to aid comprehension. It is important to note that these conditions imply that any diagonal matrix (i.e., a matrix in which only the elements along the diagonal are nonzero) with positive elements is positive definite. Many objective functions have diagonal Hessians, so it is often only necessary to check the signs of the diagonal elements to establish that such a Hessian is positive definite, with the implication that the objective function is strictly convex.

Negative Definite (負定値)
1) A matrix H is negative definite if the product hHhᵀ is negative for every nonzero vector h.
2) A matrix H is negative definite if its n-th leading principal minor determinants are negative for odd n (n = 2m + 1) and positive for even n (n = 2m).

Leading principal minor determinants are a series of determinants: the first is the determinant of the 1×1 submatrix consisting of the element a11; the second is the determinant of the 2×2 submatrix from a11 through a22; and so on. An n × n matrix has n leading principal minor determinants.

Example of Positive Definiteness
z(x1, x2) = 2x1² + x2² + 2x1x2 − 4x1 − 2x2
The partial derivative of z with respect to x1 is 4x1 + 2x2 − 4 and the partial derivative with respect to x2 is 2x2 + 2x1 − 2, from which the Hessian is

H = [[4, 2], [2, 2]]

1) For any nonzero vector h = (h1, h2), the product hHhᵀ = 4h1² + 4h1h2 + 2h2² = 2h1² + 2(h1 + h2)², which is greater than zero.
2) The leading principal minor determinants are 4 > 0 and 4·2 − 2·2 = 4 > 0.
By either calculation, the Hessian of z(x1, x2) is positive definite.
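Both tests are easy to run in code for the Hessian H = [[4, 2], [2, 2]] of z(x1, x2) = 2x1² + x2² + 2x1x2 − 4x1 − 2x2 (the sample vectors in the spot check are arbitrary choices):

```python
H = [[4.0, 2.0], [2.0, 2.0]]

# Test 2: leading principal minors of the 2x2 matrix
m1 = H[0][0]
m2 = H[0][0] * H[1][1] - H[0][1] * H[1][0]
print(m1, m2)       # 4.0 4.0 — both positive, so H is positive definite

# Test 1 (spot check): h H h^T for a few nonzero vectors h
def quad_form(h1, h2):
    """Expansion of h H h^T for this H."""
    return 4 * h1 * h1 + 4 * h1 * h2 + 2 * h2 * h2

print(all(quad_form(h1, h2) > 0
          for h1, h2 in [(1, 0), (0, 1), (1, -1), (-2, 3)]))   # True
```

The minor test is conclusive; the quadratic-form test over a handful of vectors is only illustrative, since positive definiteness requires positivity for every nonzero h.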

Example of Negative Definiteness
z(x1, x2) = −2x1² − x2² − 2x1x2 + 4x1 + 2x2

Fig.2.6 This figure shows the shape of a strictly convex function in two dimensions. The figure also shows a line segment connecting two points on this function. As required by the strict convexity condition, this line lies entirely above the function.

Constrained Minimization Programs
As in the single-variable case, the focus of this section is on the first- and second-order conditions for a minimum. As in that case, the minimum of a constrained multidimensional program can occur on the boundary of the feasible region, where the condition ∇z(x*) = 0 may not hold. Consider the two-dimensional program depicted in Figure 2.7a, where the feasible region is defined by six constraints and the objective function z(x) is illustrated by contour lines:

min z(x1, x2)  subject to  gj(x1, x2) ≥ bj,  j = 1, 2, …, 6

It is apparent from the figure that the solution of this problem, x* = (x1*, x2*), lies at the intersection of constraints 2 and 3. These constraints are therefore binding at the minimum; in other words, g2(x1*, x2*) = b2 and g3(x1*, x2*) = b3.

The direction perpendicular to any curve of the form g(x1, x2) = constant at some point x′ = (x1′, x2′) is given by the gradient of the curve at that point, ∇g(x′). The enlarged portion of the figure depicts the point (x1*, x2*) and the gradients of the two binding constraint functions at this point, ∇g2(x*) and ∇g3(x*). The definition of the first-order conditions for constrained problems rests on the observation that the gradient of the objective function at this point, ∇z(x*), must lie between the gradients of the binding constraints. The gradient vector ∇z(x*) and its opposite −∇z(x*), as well as −∇g2(x*) and −∇g3(x*), are all shown in Figure 2.7b, where this condition can be intuitively verified. If, for example, −∇z(x*) were to point below −∇g3(x*), z(x) could be decreased further by sliding to the right along g3(x); similarly, if −∇z(x*) were to point above −∇g2(x*), z(x) could be decreased further by sliding upward along g2(x). Only when −∇z(x*) lies between −∇g2(x*) and −∇g3(x*) can the objective function not be decreased further. This condition is a generalization of the same-signs condition that applied in the single-variable case.

A linear combination of the gradients of the binding constraints
With multidimensional functions the constraints define surfaces, and the gradient of each of these constraints at a given point is a vector perpendicular to the surface at that point. The observation on which the first-order conditions are based in this case is that the gradients of the binding constraints create an imaginary "cone" within which the gradient of the objective function must lie. Thus the first-order conditions can be written as a specification of the direction of the gradient of z(x) at x* in relation to the directions of the binding constraints: the requirement is that ∇z(x*) be expressible as a linear combination of the gradients of the binding constraints. For the example in Figure 2.7, this condition is

∇z(x*) = u2 ∇g2(x*) + u3 ∇g3(x*)

where u2 and u3 are nonnegative scalars associated with the second and third constraints, respectively. To generalize this condition to problems of the type min z(x) subject to gj(x) ≥ bj, an auxiliary variable uj can be defined for each constraint. In a fashion analogous to the single-dimensional case, let uj be nonnegative if the j-th constraint is binding at x* and let uj = 0 for any nonbinding constraint. The first-order conditions can then be written as

∇z(x*) = Σ_j uj ∇gj(x*),  uj[bj − gj(x*)] = 0,  uj ≥ 0,  gj(x*) ≥ bj  for all j

These are known as the Kuhn-Tucker conditions, named after the two mathematicians who first proved that they are the conditions necessary for a constrained minimum. The auxiliary variables uj are known as dual variables as well as Lagrange multipliers. The condition uj[bj − gj(x*)] = 0 for all j is known as the complementary slackness condition.
Please remember that the Kuhn-Tucker conditions are first-order conditions, so you have to check the second-order conditions in order to guarantee a global minimum.

Example of the Kuhn-Tucker conditions
Minimize z(x1, x2) subject to x1 + x2 ≥ 2. The Kuhn-Tucker conditions for this problem are as follows:

Example of the Kuhn-Tucker conditions (cont.)
1. If u = 0: Eqs. [2.16a] and [2.16b] reduce to [2.12a] and [2.12b], whose solution is x = (0, 1). But x1* + x2* = 1 < 2, so this solution does not satisfy the constraint.
2. If u ≠ 0: consequently u > 0, meaning the solution satisfies all three equations. Solving this system of equations gives x1* = 1, x2* = 1, u* = 2, and z(1, 1) = −1.

Problem: solve using the Kuhn-Tucker conditions.
Max f(x1, x2) = 3x1 + x2
subject to g1 = x1² + x2² ≤ 5
           g2 = x1 − x2 ≤ 1
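A sketch of one way to attack this problem via the Kuhn-Tucker conditions: rewrite it as a minimization in standard form, guess that both constraints are binding, and verify that the resulting multipliers are nonnegative. (The binding-constraint guess is the solver's hypothesis, not given in the problem.)

```python
# min z = -3x1 - x2   s.t.  g1 = -x1^2 - x2^2 >= -5,  g2 = -x1 + x2 >= -1.
# Both constraints binding: x2 = x1 - 1 and x1^2 + (x1 - 1)^2 = 5
# gives x1^2 - x1 - 2 = 0, so x1 = 2 (the root with the larger objective).
x1, x2 = 2.0, 1.0

# Kuhn-Tucker: grad z = u1 * grad g1 + u2 * grad g2 at (x1, x2), i.e.
#   (-3, -1) = u1 * (-2*x1, -2*x2) + u2 * (-1, 1)
a, b = -2 * x1, -1.0        # coefficients of (u1, u2) in the first equation
c, d = -2 * x2, 1.0         # ... and in the second
det = a * d - b * c
u1 = (-3 * d - b * -1) / det   # Cramer's rule on the 2x2 system
u2 = (a * -1 - -3 * c) / det
print(u1, u2)               # both nonnegative, so the KKT conditions hold
print(3 * x1 + x2)          # maximum value f = 7.0
```

Since both multipliers come out positive (u1 = 2/3, u2 = 1/3) and the point is feasible, (2, 1) satisfies the Kuhn-Tucker conditions and the maximum is f = 7.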