4. Traffic Assignment Methods (交通量配分手法)


4. Traffic Assignment Methods (Traffic Assignment Method), pp. 55
Inputs: ① an OD travel-demand function, ② a transport network, ③ link performance functions
→ a procedure for computing the flow and level of service on each link and route.
Classifications:
* flow-independent vs. flow-dependent assignment
* static vs. dynamic assignment
* deterministic vs. stochastic assignment
* user-equilibrium vs. system-optimal assignment

[1] User equilibrium (利用者均衡)
Hypothesis 1) Users always choose the fastest route.
Hypothesis 2) Users have perfect information about the available routes.
h_i: flow on route i, t_i: travel time on route i
t1 = a1 + b1·h1, a1 > 0, b1 > 0
t2 = a2 + b2·h2, a2 > 0, b2 > 0
h1 + h2 = q
If t1(q) > a2 and t2(q) > a1, the two lines intersect.
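Setting t1 = t2 (both routes used) and substituting h2 = q - h1 gives the equilibrium split in closed form; this short derivation is added here to make the intersection explicit:

$$a_1 + b_1 h_1^* = a_2 + b_2 (q - h_1^*) \;\Rightarrow\; h_1^* = \frac{a_2 - a_1 + b_2 q}{b_1 + b_2}, \qquad h_2^* = q - h_1^*.$$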

[Mathematical Optimization Problem 4.1] (1): the equivalent minimization problem and its constraints, the Lagrangian function, and the Kuhn-Tucker (necessary) conditions.
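A standard statement of the equivalent optimization problem for the two-route, single-OD setting above (a reconstruction consistent with the surrounding slides; the slide's own formulas are not in the transcript):

$$\min_{\mathbf h}\; Z(\mathbf h) = \sum_i \int_0^{h_i} t_i(w)\,dw \quad \text{s.t.} \quad \sum_i h_i = q, \;\; h_i \ge 0$$

Lagrangian: $L = Z(\mathbf h) + u\bigl(q - \sum_i h_i\bigr)$. Kuhn-Tucker (necessary) conditions:

$$h_i\,\bigl(t_i(h_i) - u\bigr) = 0, \qquad t_i(h_i) - u \ge 0, \qquad \sum_i h_i = q, \;\; h_i \ge 0.$$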

[Mathematical Optimization Problem 4.1] (2): User equilibrium: if each OD pair is in a user-equilibrium state, then all routes actually used have equal travel times, and those times are smaller than, or at most equal to, the travel time of any unused route.

[Mathematical Optimization Problem 4.2] (1): extension from a single OD pair to multiple OD pairs, where
x_a: flow on link a
f_k^rs: flow on the k-th route of OD pair rs
q_rs: OD flow of pair rs
δ_a,k^rs: element of the link-route incidence matrix of OD pair rs
plus the Lagrangian function.
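Again the slide's formulas are missing from the transcript; the standard multi-OD form consistent with the notation above (a reconstruction, not the original slide content) is:

$$\min_{\mathbf f}\; Z = \sum_a \int_0^{x_a} t_a(w)\,dw \quad \text{s.t.} \quad \sum_k f_k^{rs} = q_{rs} \;\;\forall rs, \quad f_k^{rs} \ge 0, \quad x_a = \sum_{rs}\sum_k f_k^{rs}\,\delta_{a,k}^{rs}$$

$$L = Z + \sum_{rs} u_{rs}\Bigl(q_{rs} - \sum_k f_k^{rs}\Bigr).$$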

[Mathematical Optimization Problem 4.2] (2): Kuhn-Tucker (necessary) conditions, where c_k^rs is the travel time on the k-th route of OD pair rs: one condition applies if f_k^rs > 0 and another if f_k^rs = 0. If the link performance functions are strictly increasing, the objective function is strictly convex in the link flows and the link-flow solution is unique. It is not strictly convex in the route flows, however, so the route-flow solution is not unique: even with the equilibrium link flows given, infinitely many route-flow patterns satisfy them, and no definite route-flow values can be determined (Exercise 4.4).
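In formulas, the two "if" conditions described above read:

$$f_k^{rs}\,\bigl(c_k^{rs} - u_{rs}\bigr) = 0, \qquad c_k^{rs} - u_{rs} \ge 0, \qquad \sum_k f_k^{rs} = q_{rs}, \quad f_k^{rs} \ge 0,$$

i.e. $c_k^{rs} = u_{rs}$ if $f_k^{rs} > 0$, and $c_k^{rs} \ge u_{rs}$ if $f_k^{rs} = 0$.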

Exercise 4.4: Show that infinitely many route-flow patterns satisfy the link flows given in Figure 4.12.

[2] System-optimal assignment (pp. 70): an assignment method that minimizes the total travel cost of all network users. The marginal cost function (marginal cost function) gives the increase in total travel cost when the flow on link a increases marginally, i.e., the rate at which total cost grows with the link flow (Eq. 4.8). Its first term is the average cost per vehicle borne by the newly added user; its second term is the increase in the costs of all other users caused by the new user's entry.
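Writing Eq. (4.8) out from this description (the total link cost x_a·t_a(x_a) differentiated with respect to x_a):

$$\tilde t_a(x_a) = \frac{d\bigl[x_a\,t_a(x_a)\bigr]}{dx_a} = \underbrace{t_a(x_a)}_{\text{first term}} + \underbrace{x_a\,\frac{dt_a(x_a)}{dx_a}}_{\text{second term}} \tag{4.8}$$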

Marginal cost curve (pp. 68): the average cost function t_a(x_a) is a monotonically increasing function, and x_a·t_a(x_a) is the total travel cost of all vehicles travelling on the link [4.1].

User equilibrium vs. system-optimal assignment
* The assignment results of user equilibrium (UE) and system optimum (SO) differ: left to individual behavior, the efficiency of society as a whole is not optimized.
* Policy measures can shift the network from the user-equilibrium state to the system-optimal state.
* Drivers act on average cost ⇒ user-equilibrium state; acting on the marginal cost to society as a whole ⇒ system-optimal state.
* Road pricing: levy a congestion charge during congested periods.
* Congestion: every driver is one of its causes.

4.2 Assignment techniques (pp. 72)
[3] All-or-Nothing assignment: assign the entire OD flow to the shortest route between the OD pair ("all"), and nothing to any other route ("nothing").

Exercise 4.6

Exercise 4.6, solution (1)
t1 = 50 + x1, t2 = 50 + x2, t3 = 10·x3, t4 = 10·x4
Equilibrium: t1* + t4* = t2* + t3* ⇒ 50 + x1* + 10·x4* = 50 + x2* + 10·x3*
With h1* = x1* = x4* and h2* = x2* = x3*:
50 + h1* + 10·h1* = 50 + h2* + 10·h2* ⇒ 11·h1* = 11·h2* ⇒ h1* = h2*
q = h1* + h2* = 6 ⇒ h1* = h2* = 3, so x1* = x2* = x3* = x4* = 3
c1 = c2 = 83, TC = 498

Exercise 4.6, solution (2): a new link 5 is added.
t1 = 50 + x1, t2 = 50 + x2, t3 = 10·x3, t4 = 10·x4, t5 = 10 + x5
Equilibrium: t1* + t4* = t2* + t3* = t3* + t5* + t4*
⇒ 50 + x1* + 10·x4* = 50 + x2* + 10·x3* = 10·x3* + 10 + x5* + 10·x4*
Route/link relations: h1* = x1* = x4* - h3*, h2* = x2* = x3* - h3*, h3* = x5*
⇒ 50 + h1* + 10(h1* + h3*) = 50 + h2* + 10(h2* + h3*)
and 50 + h2* + 10(h2* + h3*) = 10(h2* + h3*) + 10 + h3* + 10(h1* + h3*)
⇒ h1* = h2* = h3*; q = h1* + h2* + h3* = 6 ⇒ h1* = h2* = h3* = 2
x1* = 2, x2* = 2, x3* = 4, x4* = 4, x5* = 2; c1 = c2 = c3 = 92, TC = 552

Exercise 4.6, solution (3): building the new road increases the total travel cost, from 498 to 552 (the Braess paradox). (Figure: the network with nodes ①-④ and the new link.)
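A quick numerical check of both equilibrium answers (a sketch in Python; the route-to-link composition follows the two solutions above):

```python
# Verify the user-equilibrium solutions of Exercise 4.6.
def link_costs(x):
    """Link cost functions from the solution slides."""
    return {1: 50 + x[1], 2: 50 + x[2], 3: 10 * x[3], 4: 10 * x[4], 5: 10 + x[5]}

# Network without link 5: route 1 uses links (1, 4), route 2 uses links (2, 3).
h1 = h2 = 3.0
t = link_costs({1: h1, 2: h2, 3: h2, 4: h1, 5: 0.0})
c1, c2 = t[1] + t[4], t[2] + t[3]
print(c1, c2, 6 * c1)            # 83.0 83.0 498.0 -> equal route costs, TC = 498

# Network with link 5: route 3 uses links (3, 5, 4).
h1 = h2 = h3 = 2.0
t = link_costs({1: h1, 2: h2, 3: h2 + h3, 4: h1 + h3, 5: h3})
c1, c2, c3 = t[1] + t[4], t[2] + t[3], t[3] + t[5] + t[4]
print(c1, c2, c3, 6 * c1)        # 92.0 92.0 92.0 552.0 -> TC rises to 552
```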

2.2 Shortest-Path Search: [5] Shortest-Path Search Algorithms

1) A simple shortest-path assignment example
Origin node ①; destination nodes ⑦, ⑧, ⑨; q17 = 10, q18 = 5, q19 = 20
Notation: q_rs: OD flow; t_ij: link cost; C_rsp: route cost; f_rsp: route flow
C_rsp (the travel cost of the p-th route of OD pair rs) is the sum of the link costs t_ij on that route:
C_rsp = Σ t_ij · δ_ij,p^rs
δ_ij,p^rs: element of the link-route incidence matrix; 1 if link ij lies on the p-th route of OD pair rs, 0 otherwise
e.g. C172 = t12 + t25 + t57 = 4 + 3 + 2 = 9

All routes and route costs for each OD pair:
OD 1→7: ①②④⑦ C171 = 10, ①②⑤⑦ C172 = 9, ①③⑤⑦ C173 = 10
OD 1→8: ①②⑤⑧ C181 = 10, ①③⑤⑧ C182 = 11, ①③⑥⑧ C183 = 12
OD 1→9: ①②④⑦⑨ C191 = 16, ①②⑤⑦⑨ C192 = 15, ①③⑤⑦⑨ C193 = 16, ①②⑤⑧⑨ C194 = 13, ①③⑤⑧⑨ C195 = 14, ①③⑥⑧⑨ C196 = 15
Route flows f_rsp (each OD pair loads its cheapest route):
f172 = q17 = 10, f171 = f173 = 0
f181 = q18 = 5, f182 = f183 = 0
f194 = q19 = 20, f191 = f192 = f193 = f195 = f196 = 0

Link flow x: the superposition of the flows of all routes passing through the link:
x_ij = Σ_rs Σ_p f_rsp · δ_ij,p^rs
e.g. x58 = f181 + f182 + f194 + f195 = 5 + 0 + 20 + 0 = 25
Superposing the shortest-route flows onto every link completes the shortest-path assignment:
x12 = 35, x13 = 0, x24 = 0, x25 = 35, x35 = 0, x36 = 0, x47 = 0, x57 = 10, x58 = 25, x68 = 0, x79 = 0, x89 = 20

(1) Shortest-path assignment
Enumerating all routes is practically impossible on a real network.
↓
Shortest-path search algorithm
a. Label-setting method (Label Setting Method): each iteration of the computation makes one or more labels permanent. <Link costs must be non-negative.>

a. Label-setting method (Label Setting Method)
Permanent labels (permanent label) P, temporary labels (temporary label) T, origin r
Step 0 [initialization]:
① initial label sets: the origin r is permanent, every other node temporary: P ← {r}, T ← N - {r}
② initial label values: d(j) ← 0 if j = r; d(j) ← t_rj if (r, j) ∈ A; d(j) ← ∞ otherwise
③ initial predecessor pointers: pred(j) ← r if (r, j) ∈ A; pred(j) ← 0 otherwise

a. Label-setting method (continued)
Step 1 [node selection]: move the node i with the smallest label value from T to P: d(i) = min{d(j): j ∈ T} → P ← P ∪ {i}, T ← T - {i}
Step 2 [label update]: for each link i → j ∈ A, if d(j) > d(i) + t_ij then d(j) ← d(i) + t_ij and pred(j) ← i
Step 3 [termination]: stop if P = N; otherwise go to Step 1.
d(i): the shortest distance from origin r. The shortest route itself is obtained by tracing the predecessor pointers back from node i.
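A compact Python sketch of the label-setting steps above (this is the classic Dijkstra procedure; the data-structure choices are illustrative):

```python
import heapq

def label_setting(nodes, links, r):
    """Shortest distances d(j) and predecessor pointers pred(j) from origin r.
    links: dict mapping (i, j) -> non-negative link cost t_ij."""
    adj = {i: [] for i in nodes}
    for (i, j), t in links.items():
        adj[i].append((j, t))
    d = {j: float("inf") for j in nodes}
    pred = {j: 0 for j in nodes}
    d[r] = 0.0
    T = [(0.0, r)]          # temporary labels; the heap yields the minimum (Step 1)
    P = set()               # permanent labels
    while T:
        di, i = heapq.heappop(T)
        if i in P:
            continue        # stale temporary label; node already permanent
        P.add(i)
        for j, t in adj[i]:             # Step 2: label update
            if di + t < d[j]:
                d[j] = di + t
                pred[j] = i
                heapq.heappush(T, (d[j], j))
    return d, pred          # Step 3: all labels permanent
```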

Shortest-path assignment algorithm: assign each OD pair's flow to its shortest route. OD pairs are numbered consecutively; their total number is N.
Step 0 [initialization]: n ← 1, x_ij ← 0 for all ij
Step 1 [shortest-path search]: find the shortest route for the n-th OD pair and record the link numbers on that route in the set A_n
Step 2 [OD flow loading]: set the per-OD link flow y_ij to OD pair n's flow: y_ij ← q_rs for (i, j) ∈ A_n
Step 3 [link flow accumulation]: x_ij ← x_ij + y_ij for (i, j) ∈ A_n
Step 4 [termination]: stop if n = N; otherwise set n = n + 1 and go to Step 1.
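Combining the two procedures gives a minimal all-or-nothing loader (a sketch reusing the label_setting function above; it assumes every destination is reachable from its origin):

```python
def all_or_nothing(nodes, links, od_flows):
    """od_flows: dict mapping (r, s) -> OD flow q_rs. Returns link flows x_ij."""
    x = {ij: 0.0 for ij in links}
    for (r, s), q in od_flows.items():
        d, pred = label_setting(nodes, links, r)  # Step 1: shortest routes from r
        j = s
        while j != r:          # trace predecessors back from s (this is the set A_n)
            i = pred[j]
            x[(i, j)] += q     # Steps 2-3: load q_rs onto the route and accumulate
            j = i
    return x
```

Applied to the nine-node example above (with the figure's link costs), this should reproduce the listed link flows such as x12 = 35, x58 = 25 and x89 = 20.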

[National Civil-Service Examination (Class I)]
Four zones A-D are connected by the road network shown in the figure, with the distances between zones as shown. A new road is to be built between m and n. If the travel speed on the new road is twice that of the existing roads, what is the traffic volume diverted to the new road? Assume every vehicle takes the time-shortest route, and that the daily traffic volumes between zones A-D are as given in the OD table below.
① 100 ② 230 ③ 380 ④ 420 ⑤ 460
Explanation: since the speed on section m-n is doubled, treat the section as 10 km. Only the A↔B and A↔C traffic is diverted to the new road: 2 × (130 + 100) = 460.


------------------- Optimization Problems --------------

2. Nonlinear Programming (非線形計画問題)
1) Both the objective function (目的関数) and the constraint functions (制約条件) are continuous (連続) and differentiable (微分可能).
2) The feasible region (実行可能領域) is convex (凸) and the objective function is strictly convex (狭義凸).
⇒ A unique solution (唯一解)

Today I will explain nonlinear programming. Do you remember this figure? Please remember that the methods I will explain are very powerful, but the area where they apply is limited; this limitation does not reduce their importance. The first condition is that both the objective function and the constraint functions are continuous and differentiable; the second is that the feasible region is convex and, furthermore, that the objective function is strictly convex. These conditions guarantee the uniqueness of the solution. Uniqueness is a very important issue when we search for an optimum: if it is not guaranteed, we cannot stop the search even after finding a good solution, because a better one may remain. Uniqueness allows us to stop searching for other solutions.

Nonlinear Programming: classification
Criteria: number of variables (one vs. multi) × number of constraints (none vs. some).
We divide nonlinear programming into four groups by two criteria: whether the number of variables is one or more than one, and whether the number of constraints is zero or not. In this class I will explain the four methods corresponding to these four groups, starting with the easiest case, in which the number of variables is one and the number of constraints is zero. (Figure: the four cases, e.g. an unconstrained curve in x1 versus a constrained one with b1 ≤ x1 ≤ b2.)

2.1 Programs with One Variable: first-order condition
It is well known from elementary calculus that the necessary condition for a differentiable function in one variable, z(x), to have a minimum at x = x* is that the derivative of z(x) evaluated at x* equals zero. In other words,
dz(x*)/dx = 0 [2.3]
This is a first-order condition for a minimum. If there is a minimum at x*, this condition must hold.

Fig. 2.1 ditonic: a function with a single minimum (unimodal).
As demonstrated in this figure, however, the first-order condition is not sufficient to ensure that x* is a minimum of z(x). In this figure dz(x)/dx = 0 for four values of x, namely x = a, x = b, x = c and x = d, all of which are stationary points of z(x). Note that x = a and x = d are local minima (i.e., they minimize the function in their immediate vicinity), while x = b is a point of inflection and x = c is a (local) maximum. To prove that a stationary point (such as x = a in Figure 2.1) is a minimum, two characteristics have to be demonstrated: (1) that it is a local minimum and not a local maximum or an inflection point, and (2) that it is a global minimum; in other words, the value of z(x) at x* is lower than z(x) at any other x. A function with a single minimum is called ditonic or unimodal. A function is ditonic if it is strictly decreasing to the left of the minimum and strictly increasing to the right; thus, if a function is ditonic, its stationary point is a global minimum. A sufficient condition for a stationary point to be a local minimum is for the function to be strictly convex in the vicinity of the stationary point. Intuitively, strict convexity means that the function is "heavy in the middle," as illustrated in the vicinity of x = a and x = d.

Conditions for a Strictly Convex Function (second-order conditions)
1) The line segment condition
2) The linear approximation condition
3) The second derivative condition
Formally, strict convexity means that a line segment connecting any two points of the function lies entirely above the function: a function is strictly convex if this inequality is satisfied for any two distinct points x1 and x2 and for any value of θ with 0 < θ < 1. Alternatively, strict convexity of a differentiable function can be determined by testing whether a linear approximation to the function always underestimates the function; thus z(x) is strictly convex at x1 if and only if the corresponding inequality is satisfied for any point x2 ≠ x1. Finally, if a function is twice differentiable in the vicinity of a stationary point, the strict convexity condition is equivalent to requiring that the second derivative of z(x) at x* be positive, that is, d²z(x*)/dx² > 0. If a function is strictly convex in the vicinity of a stationary point, this point is a local minimum. If a function includes more than one local minimum, it is usually difficult to demonstrate that a particular local minimum is also a global one; doing so may require identifying all the local minima and comparing the objective function values at these points, which can be very difficult and is sometimes impossible. If, however, x* is the only local minimum of z(x), then it is naturally also a global minimum, and to demonstrate that a local minimum is unique it is sufficient to show that z(x) is convex for all values of x. Please remember: if one of the three conditions is satisfied, the function is strictly convex, it has only one local minimum, and that minimum is guaranteed to be global.
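Restating the three conditions from the narration in LaTeX:

$$\text{1)}\;\; z\bigl(\theta x_1 + (1-\theta)x_2\bigr) < \theta\,z(x_1) + (1-\theta)\,z(x_2), \qquad 0 < \theta < 1,\; x_1 \ne x_2$$

$$\text{2)}\;\; z(x_2) > z(x_1) + \frac{dz(x_1)}{dx}\,(x_2 - x_1), \qquad x_2 \ne x_1$$

$$\text{3)}\;\; \frac{d^2 z(x^*)}{dx^2} > 0 \quad \text{(at a stationary point } x^*\text{)}$$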

Fig.2.2

Constrained Minimization Programs: binding constraint
In constrained minimization it is possible to have a minimum where the first derivative does not vanish; as a result, the first-order conditions have to be generalized to include this possibility. Consider, for example, the function z(x) defined over the feasible region a′ ≤ x ≤ a″. In the first case the minimum point is internal (inside the feasible region) and the condition dz(x*)/dx = 0 holds, while in Figure b the minimum point is on the boundary, at x = a′, and clearly dz(a′)/dx > 0. When the minimum is on the boundary of the feasible region, the following observation can be made: if the constraint on which the solution lies is the left edge, then z(x) must be increasing at x*, as is the case in Figure b; if, on the other hand, the binding constraint is the right edge, then z(x) must be decreasing at x*, as in the case shown in Figure c. Please keep in mind that a binding constraint is a constraint that holds with equality at the optimum.

Standard Form (標準形)
min z(x) subject to x ≥ a′ and -x ≥ -a″
equivalently: min z(x) subject to g1(x) ≥ b1 and g2(x) ≥ b2, where g1(x) = x, b1 = a′; g2(x) = -x, b2 = -a″
To characterize these situations mathematically in a statement of first-order conditions, the problem depicted in the preceding figure is first written in the standard form above, and this formulation is then generalized as shown.

Standard Form (continued): binding constraint (有効制約条件) vs. non-binding constraint
When the constraints are expressed in standard form, the aforementioned observation about the slope of z(x) at x* means that the derivative of z(x) at x* and the derivative of the binding constraint at this point, dg(x*)/dx, always have the same sign. This observation can be used to extend the first-order conditions, because it holds whenever a minimum occurs on the boundary of the feasible region as well as for an unconstrained solution. In the example given in Figure b this condition can be verified, since dg1(x*)/dx = 1 and, from the figure, dz(x*)/dx > 0; for Figure c, dg2(x*)/dx = -1 and dz(x*)/dx < 0. Since both derivatives are scalars, the same-sign condition can be written as dz(x*)/dx = u_j·dg_j(x*)/dx, where u_j is a nonnegative scalar and g_j(x) is the binding constraint. In order to develop a set of first-order conditions that applies in all cases, define a nonnegative variable u_j for each of the constraints (not only the binding ones). If the j-th constraint is not binding, i.e., if g_j(x*) > b_j, let u_j = 0. In other words, the condition [b_j - g_j(x*)]·u_j = 0 holds for all constraints j. Using this convention, the condition above can be replaced by the condition on the next slide.

dz(x*)/dx - Σ_j u_j·dg_j(x*)/dx = 0, where u_j is a nonnegative scalar and the sum includes all the constraints. Note that this equation generalizes the condition for an unconstrained minimum (see Eq. [2.3]): an unconstrained minimization problem can be looked upon as a constrained problem with an internal minimum point, in which case all u_j = 0 (since none of the constraints is binding) and the condition simply states that dz(x*)/dx = 0. These conditions mean that the derivative of the function at the minimum and the derivatives of the binding constraints at the minimum have the same signs, and that the minimum point is (of course) feasible. These equations give the general form of the first-order conditions for the minimum of a single-variable function.
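Collected, the full set of first-order conditions reads (a restatement in LaTeX consistent with the prose above; the slide's own equation boxes are not in the transcript):

$$\frac{dz(x^*)}{dx} - \sum_j u_j\,\frac{dg_j(x^*)}{dx} = 0, \qquad u_j\,\bigl[b_j - g_j(x^*)\bigr] = 0 \;\;\forall j, \qquad g_j(x^*) \ge b_j, \quad u_j \ge 0 \;\;\forall j.$$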

2.2 Multidimensional Programs (多変数)
In multidimensional problems the objective function is given by z(x), where x = (x1, ..., xI) is a vector of length I (i.e., an array of variables x1, x2, ..., xI). The constraints are given by g_j(x) ≥ b_j. These expressions can be simplified by using vector notation: letting x denote the vector of decision variables, x = (x1, x2, ..., xI), the program can be written as min z(x) subject to g_j(x) ≥ b_j. This is the standard form. Note that all constraints are specified with a "greater than or equal" sign; constraints of the "less than or equal" type can be converted to this standard form by multiplying them by -1, and an equality constraint g_j(x) = b_j can be expressed as the pair of constraints g_j(x) ≥ b_j and g_j(x) ≤ b_j. Again, this discussion of multivariable minimization programs deals separately with unconstrained and constrained problems, including a review of both first- and second-order conditions in each case. As in the preceding section, the regularity conditions on z(x) and g_j(x) are assumed to hold, and a solution therefore always exists.

Unconstrained Minimization Programs (制約条件なし, 多変数): First-Order Condition (一次条件)
If the function to be minimized is unconstrained, the first-order condition for a minimum at x = x* is that the gradient of z(x) vanish at x*. The gradient of z(x) with respect to x, ∇_x z(x), is the vector of partial derivatives:
∇_x z(x) = (∂z/∂x1, ∂z/∂x2, ..., ∂z/∂xI) [2.8]
The subscript x in the gradient notation is usually omitted if the variable with respect to which the partial derivatives are taken is obvious. At every point x the gradient points in the direction of the steepest increase in z(x); thus the gradient is the directional derivative of z(x) in the direction of steepest ascent. As mentioned above, the first-order condition for a minimum is
∇z(x*) = 0 [2.9a]
meaning that each component of the gradient has to be equal to zero: ∂z(x*)/∂x_i = 0 for every i. [2.9b]
This condition is entirely analogous to the condition dz(x*)/dx = 0 used in the single-variable case. It is only a necessary condition for a minimum; that is, it establishes the fact that z(x) has a stationary point at x*.

Example: z(x1, x2) = 2x1² + x2² + 2x1·x2 - 4x1 - 2x2
Let's find a stationary point of the function z of x1 and x2.
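Carrying the computation out (added here as a worked step; the slide left it to the board):

$$\frac{\partial z}{\partial x_1} = 4x_1 + 2x_2 - 4 = 0, \qquad \frac{\partial z}{\partial x_2} = 2x_1 + 2x_2 - 2 = 0$$

Subtracting the second equation from the first gives $2x_1 = 2$, so the stationary point is $x^* = (1, 0)$ with $z(1, 0) = -2$.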

Conditions for Strictly Convex Functions (second-order conditions, 2次条件)
1) The line segment condition
2) The linear approximation condition
3) The second derivative condition: H positive definite (正定値)
To show that a stationary point x* is a local minimum, it is sufficient to demonstrate that z(x) is strictly convex in the vicinity of x = x*, as was the case with the single-dimensional function. The conditions for strict convexity in the multidimensional case parallel those given in the preceding section for single-variable functions. If a line segment connecting any two points x′ and x″ lies entirely above the function (for any scalar θ between zero and 1), the function is strictly convex. Alternatively, if the function is always underestimated by a linear approximation, i.e., z(x″) > z(x′) + ∇z(x′)ᵀ(x″ - x′), where the superscript T denotes the vector transposition operation, the function is also strictly convex. Finally, if ∇²z (the Hessian H) is positive definite, the function is also strictly convex. Recalling the conditions for strict convexity in one variable, the first two conditions are analogous between the multivariable and single-variable cases; the difference is just between vector and scalar. As the second derivative condition is a little different, I will explain it in detail, starting with what ∇²z is.

Hessian
∇²z is a matrix called the Hessian matrix. Its element in the i-th row and j-th column is the second partial derivative of z with respect to x_i and x_j, i.e., ∂²z/∂x_i∂x_j.
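In symbols:

$$\nabla^2 z = H = \begin{pmatrix} \dfrac{\partial^2 z}{\partial x_1^2} & \cdots & \dfrac{\partial^2 z}{\partial x_1 \partial x_I} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial^2 z}{\partial x_I \partial x_1} & \cdots & \dfrac{\partial^2 z}{\partial x_I^2} \end{pmatrix}$$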

Positive Definite (正定値)
1) A matrix H is positive definite if the product h·H·hᵀ is positive for every nonzero vector h.
2) A matrix H is positive definite if all of its leading principal minor determinants are positive.
These are two equivalent characterizations; I will explain the leading principal minor determinant first, and an example should help comprehension. It is important to note that these conditions imply that any diagonal matrix (i.e., a matrix in which only the elements along the diagonal are nonzero) with positive elements is positive definite. Many objective functions have diagonal Hessians, and it is therefore only necessary to check the signs of the diagonal elements to establish that such a Hessian is positive definite, with the implication that the objective function is strictly convex.

Negative Definite (負定値)
1) A matrix H is negative definite if the product h·H·hᵀ is negative for every nonzero vector h.
2) A matrix H is negative definite if its leading principal minor determinants alternate in sign: the n-th leading principal minor is negative for odd n (n = 2m + 1) and positive for even n (n = 2m).

Leading Principal Minor Determinants (小行列式)
Leading principal minor determinants are a series of determinants: the first is the determinant of the submatrix consisting of the element a11 alone; the second is the determinant of the submatrix spanning a11 to a22; and so on. For an n × n matrix we can calculate n leading principal minor determinants.
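In symbols, for an n × n matrix A:

$$D_1 = a_{11}, \qquad D_2 = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}, \qquad \ldots, \qquad D_n = \det A.$$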

Example of Positive Definite
z(x1, x2) = 2x1² + x2² + 2x1·x2 - 4x1 - 2x2
The partial derivative of z with respect to x1 is 4x1 + 2x2 - 4, and with respect to x2 it is 2x1 + 2x2 - 2; the Hessian is derived from these two equations.
1) The product h·H·hᵀ for any nonzero vector h = (h1, h2) is 4h1² + 4h1·h2 + 2h2² = 2h1² + 2(h1 + h2)², which is greater than zero.
2) The leading principal minor determinants: the first is 4, which is positive, and the second is 4·2 - 2·2 = 4, also positive.
From these calculations, ∇²z(x1, x2) proves to be positive definite.
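A quick numerical confirmation of both characterizations (a sketch using NumPy):

```python
import numpy as np

# Hessian of z(x1, x2) = 2*x1**2 + x2**2 + 2*x1*x2 - 4*x1 - 2*x2
H = np.array([[4.0, 2.0], [2.0, 2.0]])

# 1) h H h^T > 0 for many random nonzero vectors h
h = np.random.randn(1000, 2)
print((np.einsum("ni,ij,nj->n", h, H, h) > 0).all())   # True

# 2) leading principal minors are positive: 4 and det(H) = 4
print(np.linalg.det(H[:1, :1]), np.linalg.det(H))       # 4.0 4.0

# Equivalently, all eigenvalues are positive
print(np.linalg.eigvalsh(H))                            # both > 0
```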

Example of Negative Definite
z(x1, x2) = -2x1² - x2² - 2x1·x2 + 4x1 + 2x2
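Working the check out (added; it mirrors the positive-definite example): the Hessian is

$$H = \begin{pmatrix} -4 & -2 \\ -2 & -2 \end{pmatrix}, \qquad D_1 = -4 < 0, \qquad D_2 = (-4)(-2) - (-2)(-2) = 4 > 0,$$

so the leading principal minors alternate in sign and H is negative definite.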

Fig.2.6 This figure shows the shape of a strictly convex function in two dimensions. The figure also shows a line segment connecting two points on this function. As required by the strict convexity condition, this line lies entirely above the function.

Constrained Minimization Programs
min z(x1, x2) s.t. g_j(x1, x2) ≥ b_j, j = 1, 2, ..., 6
As in the single-variable case, the focus of this section is the first- and second-order conditions for a minimum. The minimum of a constrained multidimensional program can occur on the boundary of the feasible region, where the condition ∇z(x*) = 0 may not hold. Consider the two-dimensional program depicted in Figure 2.7a, where the feasible region is defined by six constraints and the objective function z(x) is illustrated by contour lines, with an arrow giving the direction of decreasing objective function. It is apparent from the figure that the solution of this problem, x* = (x1*, x2*), lies at the intersection of constraints 2 and 3. These constraints are therefore binding at the minimum; in other words, g2(x1*, x2*) = b2 and g3(x1*, x2*) = b3.

The direction perpendicular to any curve of the form g(x1, x2) = constant at some point x′ = (x1′, x2′) is given by the gradient of the curve at that point, ∇g(x′). The enlarged portion of the figure depicts the point (x1*, x2*) and the gradients of the two binding constraint functions at this point, ∇g2(x*) and ∇g3(x*). The definition of the first-order conditions for constrained problems rests on the observation that the gradient of the objective function at this point, ∇z(x*), must lie between the gradients of the binding constraints. The gradient vector ∇z(x*) and its opposite -∇z(x*), as well as -∇g2(x*) and -∇g3(x*), are all shown in Figure 2.7b, where this condition can be intuitively verified. If, for example, -∇z(x*) were to point below -∇g3(x*), z(x) could be decreased further by sliding to the right along g3(x); similarly, if -∇z(x*) were to point above -∇g2(x*), z(x) could be decreased further by sliding upward along g2(x). Only when -∇z(x*) lies between -∇g2(x*) and -∇g3(x*) can the objective function not be decreased further. This condition is a generalization of the same-signs condition that applied in the single-variable case.

∇z(x*) expressed as a linear combination of the gradients of the binding constraints.
With multidimensional functions the constraints define surfaces, and the gradient of each constraint at a given point is a vector perpendicular to the surface at that point. The observation on which the first-order conditions are based is that the gradients of the binding constraints create an imaginary "cone" within which the gradient of the objective function must lie. The first-order conditions can thus be written as a specification of the direction of ∇z(x*) relative to the directions of the binding constraints: the requirement is that ∇z(x*) be expressed as a linear combination of the gradients of the binding constraints. For the example in Figure 2.7 this condition is ∇z(x*) = u2·∇g2(x*) + u3·∇g3(x*), where u2 and u3 are nonnegative scalars associated with the second and third constraints, respectively. To generalize this condition to problems of the type min z(x) s.t. g_j(x) ≥ b_j, an auxiliary variable u_j can be defined for each constraint: in a fashion analogous to the single-dimensional case, let u_j be nonnegative if the j-th constraint is binding at x*, and let u_j = 0 for any nonbinding constraint. The resulting conditions are known as the Kuhn-Tucker conditions, named after the two mathematicians who first proved that they are necessary for a constrained minimum. The auxiliary variables u_j are known as dual variables or Lagrange multipliers, and the condition u_j[b_j - g_j(x*)] = 0 ∀ j is known as the complementary slackness condition. Please remember that the Kuhn-Tucker conditions are first-order conditions, so you have to check the second-order conditions in order to guarantee a global minimum.
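Collected in LaTeX (the Kuhn-Tucker conditions exactly as described above):

$$\nabla z(x^*) - \sum_j u_j\,\nabla g_j(x^*) = 0, \qquad u_j\,\bigl[b_j - g_j(x^*)\bigr] = 0 \;\;\forall j, \qquad g_j(x^*) \ge b_j, \quad u_j \ge 0 \;\;\forall j.$$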

Example of the Kuhn-Tucker conditions
Minimize z(x1, x2) subject to the constraint given on the slide. The Kuhn-Tucker conditions for this problem are as follows.

Example of the Kuhn-Tucker conditions (cont.)
1. If u = 0: x1* = 0, x2* = 1, and x1* + x2* = 1 < 2
2. If u ≠ 0: x1* = 1, x2* = 1, u* = 2, z(1, 1) = -1
To solve these equations, note that if u = 0, Eqs. [2.16a] and [2.16b] reduce to [2.12a] and [2.12b]. The solution of those equations is x = (0, 1), but it does not satisfy the constraint (since x1* + x2* = 1 < 2). Consequently u > 0, meaning the constraint is binding and the solution satisfies the three remaining equations. Solving this system gives x1* = 1, x2* = 1, u* = 2, and z(1, 1) = -1.

Problem: solve using the Kuhn-Tucker conditions
Max f(x1, x2) = 3x1 + x2
subject to g1: x1² + x2² ≤ 5
g2: x1 - x2 ≤ 1
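As a self-check, a numerical solution of this exercise (a sketch using scipy.optimize; the Kuhn-Tucker analysis itself is what the exercise asks for):

```python
import numpy as np
from scipy.optimize import minimize

# Maximize f = 3*x1 + x2  <=>  minimize -f
res = minimize(
    lambda x: -(3 * x[0] + x[1]),
    x0=np.zeros(2),
    constraints=[
        {"type": "ineq", "fun": lambda x: 5 - x[0] ** 2 - x[1] ** 2},  # g1
        {"type": "ineq", "fun": lambda x: 1 - x[0] + x[1]},            # g2
    ],
)
print(res.x, -res.fun)   # both constraints bind: x* = (2, 1), f* = 7
```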