Составитель Борис Демешев
Infinity by continuity? 5.46. 𝑃 (𝐴|𝐵) = 𝑝 𝑘−1 · 1 + (1 − 𝑝 𝑘−1 )𝑃 (𝐴|𝐵 𝑐 ) 𝑃 (𝐴|𝐵 𝑐 ) = 𝑞 𝑟−1 · 0 + (1 − 𝑞 𝑟−1 )𝑃 (𝐴|𝐵) 𝑃 (𝐴) = 𝑝 · 𝑃 (𝐴|𝐵) + 𝑞 · 𝑃 (Число черных всегда нечетно. Единица. 5.48. Сделав первый узел мыс вероятностью получаем 1 узел и гарантированно снижаем число спагетти на единицу. Следовательно, 𝐸(𝑋) = ∑︀ 𝑛 𝑖=1 1 2𝑛−1 , примерно 2 ln б) вероятность того, что заданная спагетти завяжется сама с собой равна 2𝑛−1 , значит среднее равно б) 𝑓(𝑘, 𝑛) = 𝑘 𝑛 𝑓 (𝑘 − 1, 𝑛 − 1) + 𝑛−𝑘 𝑛 𝑓 (𝑘, 𝑛 − Ключ из первой разбитой копилки с вероятностью 𝑘 𝑛 не дает ничего нового, ас вероятностью 𝑛−𝑘 𝑛 «разбивает» еще одну копилку. Решение 𝑓(𝑘, 𝑛) В силу простоты ответа хочется какого-нибудь более красивого решения. Welcome! в) 𝑘(𝑛+1) 𝑘+1 г) 𝑘(𝑛+1) (𝑘+1)𝑛 . Красивое док-во - У системы 4 состояния (1-1-1, 1-2-0, 2-1-0, Пишем три уравнения на ожидаемые времена. Решая находим 𝐸(𝑇 ) Минимум при 𝑝 = Ответ слишком красивый. красивое решение 1: 𝑝 2 = 1 С вероятностью 𝑛−1 𝑛 сверху стопки лежит номер, меньший 𝑛, в этом случае можно считать, что 𝑛-ый номер вообще отсутствует в стопке ТВИМС-задачник. Демешев Борис. С вероятностью 1 𝑛 сверху стопки лежит 𝑛-ый номер, тогда обязательно происходит одно перекладывание, после которого мысленно выкинув 𝑛-ый номер можно считать, что имеется случайно упорядоченная стопка из (𝑛 − 1) выпуска+ Итого 𝑝 𝑛 = ∑︀ 𝑛 𝑖=2 1 𝑖 ≈ 𝑛 ln(𝑛) Solution Пусть 𝑞 𝑖 - вероятность того, что число 𝑖 уберут с верха стопки Вероятность того, что число 𝑖 уберут с верха стопки равна вероятности того, что среди чисел 1, 2 ,... 𝑖 число 𝑖 будет первым, те) = 𝐸(𝑋 2 ) + ... + 𝐸(𝑋 𝑛 ) = ∑︀ 𝑛 𝑖=2 1 𝑖 ≈ 𝑛 В каждой семье одна девочка! г) половина как только рождается девочка семья отказывается от дальнейших детей б) 𝑒 = 0.5 + 0.5(𝑒 + 1), 𝑒 = в) ∑︀ 0.5 𝑖 𝑖−1 𝑖 = ∑︀ 0.5 𝑖 − ∑︀ 0.5 𝑖 1 𝑖 = 1 + 𝑙𝑛(1/2) = 1 − е) в каждой семье одна девочка, поэтому 2 − 1 = 1 (см. 
пункт б) з) с вероятностью 0.5 эта величина не определена а) 𝐸( 𝑀 1 +...+𝑀 𝑛 𝑀 1 +...+𝑀 𝑛 +𝑛 ) = 𝐸( ¯ 𝑀 ¯ 𝑀 +1 ) , по ЗБЧ (плюс DCT, все дроби ограничены сверху) искомое ожидание стремится к 0.5 при 𝑛 → д) 𝐸( 𝑀 1 +...+𝑀 𝑛 𝑛 ) = 𝐸(𝑀 1 ) = ж) 𝐸( 𝑛 𝑀 1 +...+𝑀 𝑛 ) = 𝐸( 1 ¯ 𝑀 ) , по ЗБЧ и (uniform integrable) стремится к 1. С вероятностью 1/2 𝑛 - неопре- делена, как обойти (𝑋 = 𝑛) = 𝑃 (𝑥 1 + . . . + 𝑥 𝑛−1 ≤ 𝑥) − 𝑃 (𝑥 1 + . . . + 𝑥 𝑛 ≤ По индукции можем показать, что 𝑃 (𝑥 1 + . . . + 𝑥 𝑛 ≤ 𝑥) Поэтому 𝑃 (𝑋 = 𝑛) Далее - ряд Тейлора ) = 𝑒𝑥𝑝(𝑥) , 𝑉 𝑎𝑟(𝑥) = 𝑒𝑥𝑝(𝑥)(1 + 2𝑥) − 𝑒𝑥𝑝(2𝑥) Solution Пусть сейчас накоплено 𝑎 денег. Обозначим 𝑁(𝑎) - число оставшихся шагов. Введем две функции (𝑎) = 𝐸(𝑁 (и 𝑔(𝑎) = Методом первого шага можно получить уравнения (𝑎) = ∫︀ 𝑥 𝑎 𝑓 (𝑡)𝑑𝑡 𝑔(𝑎) = 1 + ∫︀ 𝑥 𝑎 𝑔(𝑡)𝑑𝑡 + 2 ∫︀ 𝑥 𝑎 𝑓 (И начальные условия 𝑓(𝑥) = 𝑔(𝑥) = Далее перейти к дифурам, решить их. Уравнения с интегралами можно пояснить дискретным случаем, 1, 10, используйте симметрию куба 5.64. игра справедливая Перевес в сумме в точности компенсируется последним слагаемым better one?). Вероятность зависит о числа гнилых яблок в ведре. Получаем четыре неизвестных. Они легко находятся начиная с 𝑝 1 5.66. 5.67. 𝑛 , 2𝑛, 𝑐𝑙𝑛(𝑛) 5.68. 5.69. 𝐸[𝑇 𝐴 ] = 10 𝐸[𝑇 𝐵 ] = 8 We wanna equate (1 + 𝑝 − 𝑝 3 )/(𝑝 − 𝑝 3 ) − 1 and (𝑝 2 − 3𝑝 + 3)/(1 − 𝑝) 2 + 1/𝑝 − 1 ТВИМС-задачник. Демешев Борис. roah@yandex.ru 128 It simplifies to 𝑝 3 − 2𝑝 2 − 𝑝 + 1 = 0 , 𝑝 ≈ 0.5550 5.70. 5.71. 2 5.72. 6 5.73. 3 5.74. 1/7 5.75. 6 5.76. 5.77. 5.78. 5.79. 5.80. 5.81. 5.82. 5.83. 5.84. 5.85. 5.86. 5.87. 5.88. 5.89. 5.90. 0.5 5.91. 0.5 5.92. 5.93. 5.94. 5.95. 5.96. 5.97. 5.98. 5.99. 1/2. One way of seeing this is as follows: first look at what happens after both have done 50 shots. Either A is ahead (with probability, say 𝑝), or B is ahead (which by symmetry, is also equal to p), or both are tied (with probability, say, 𝑞). So obviously, 2𝑝 + 𝑞 = 1. Now what are the ways A can win after 51 shots? 
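Returning to 5.69 above: the claimed root of p³ − 2p² − p + 1 = 0, p ≈ 0.5550, can be confirmed numerically. A minimal bisection sketch (the cubic changes sign on [0, 1], so bisection applies; tolerance is an arbitrary choice):

```python
def f(p):
    # cubic from problem 5.69: p^3 - 2*p^2 - p + 1
    return p**3 - 2*p**2 - p + 1

def bisect(lo, hi, tol=1e-10):
    # assumes f(lo) and f(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

root = bisect(0.0, 1.0)
print(round(root, 4))  # ≈ 0.555
```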
He can win if he’s already ahead after 50 shots (which happens with probability 𝑝) and then the outcome of the 51st shot is irrelevant, or if he’s tied after 50 shots and has a success on the 51st shot (which occurs with probability 𝑞 · 1/2). (If A is behind B after 50, he either stays behind or at best ties at the 51st, and if A and B are tied at 50, then they still stay tied if A misses his 51st shot.) Thus, the probability of A winning is 𝑝 + 𝑞/2 = 1/2. If you know some advanced probability theory, then a simpler argument, called coupling, says that without loss of generality, you can assume that A and B have exactly the same hits and misses during their first 50 shots. Then A wins if and only if he succeeds on his 51st shot, which occurs with probability 1/2. Note that 50 isn’t special here; this works for any n and n+1. Another simplified version of this problem: if A flips n+1 and B flips n fair coins, show that the probability that A gets more heads than B is 1/2. 5.100. P_n = P_{n-1} · (2n−1)/(2n+1) + 1/(2n+1), и P_n = n/(2n+1) 5.101. 5.102. а) Уравнение методом первого шага, корень p = √2 − … б) Если U нечетно, то шансы выжить равны нулю. Если U четно, то шансы выжить равны … 5.105. First step, 2^t − 1. Может, есть какое-то устное решение, ибо красивая формула? 5.106. б) 168 а) P = p + (1 − p)·… б) Исходя из п. а), получаем 0 (можно воспользоваться ф. Стирлинга); P(S_n ≥ 1) = P(S_n ≤ −1) → … и в) не существует. Введем величины x_n = P(3|S_n), y_n = P(3|S_n + 1) и z_n = P(3|S_n + 2). Тогда x_0 = 1, y_0 = 0, z_0 = 0, и x_n = 0.5·y_{n−1} + 0.5·z_{n−1}, y_n = 0.5·x_{n−1} + 0.5·z_{n−1}, z_n = 0.5·x_{n−1} + 0.5·y_{n−1}. Решая систему разностных уравнений (λ_1 = −0.5, λ_2 = −0.5, λ_3 = 1; двум кратным корням соответствуют два линейно независимых собственных вектора), находим, что x_n → 1/3. 5.110. 5.111.
𝑥 18 = 0 𝑥 19 = 2/3𝑥 20 + 1/3𝑥 18 + 1 𝑥 20 = 0.1𝑥 19 + 0.9𝑥 21 + 1 𝑥 21 = 2/3𝑥 20 + 1/3𝑥 22 + 1 𝑥 22 − 𝑥 20 = 2(𝑥 21 − 𝑥 20 ) 𝑥 25 − 𝑥 20 = 5(𝑥 21 − Решая, получаем 𝑥 19 = 77 , 𝑥 20 = 114 , , 𝑥 21 = 117 , 𝑥 22 = 120 , 𝑥 25 = 129 5.112. 100 10000−200 5.113. 5.114. The answer appears to be log 2. A possible proof could go as follows: Let 𝑋 1 ,𝑋 2 ,. . . random variables with values in [0; 1] such that 𝑋 𝑛 is distributed uniformly on [0, 𝑋 𝑛−1 ] . We have to calculate 𝑝 := ∑︀ 𝑛 𝑃 (𝑋 𝑛 ≥ 1/2) Using the recursion 𝑃 (𝑋 𝑛+1 ≥ 𝑐) = ∫︀ 1−𝑐 0 𝑑𝑦𝑃 (𝑋 𝑛 ≥ 𝑐/(1 − 𝑦)) = ∫︀ 1 𝑐 𝑑𝑧 𝑐 𝑧 2 𝑃 (𝑋 𝑛 ≥ 𝑧) it is easy to prove inductively that 𝑃 (𝑋 𝑛 ≥ 𝑐) = 1 − 𝑐 ∑︀ 𝑘<𝑛 (− log 𝑐) 𝑘 𝑘! = 𝑐 ∑︀ 𝑘≥𝑛 (− log 𝑐) 𝑘 𝑘! and thus interchanging the sums over n and k we get 𝑝 = 𝑐(− log 𝑐) ∑︀ 𝑘≥0 (− log 𝑐) 𝑘 𝑘! = − log 𝑐 where 𝑐 = Вероятность выиграть, если ходить первым 𝑝 = 0.5 + 0.5 · 0.5 · 𝑝; 𝑝 = Пусть 𝑝 𝑛 - вероятность выиграть 𝑛-ую партию (для Аи. Процедуру голосования можно представить себе так: Положим напрямую красный шар. Затем по одному будем класть белые шары (миллион штук). Причем класть их будем равновероятно на любое место между уже положенными шарами, или с любого края полоски. В результате положение красного шара (отделяющего голоса заразных кандидатов) распределено равновероятно. Ответ: 0.2·10 6 10 6 +1 ≈ 0.2 а) разностное уравнение сну, решение 𝑝(𝑘) = 𝑘/𝑛, где 𝑘 - число красных шаров, 𝑛 - общее число шаров, может что попроще решение 2: 𝐾 𝑡 - число красных шаров - мартингал, если 𝑇 момент остановки, то 𝐸(𝐾 𝑇 ) = 𝑘 0 , значит 𝑃 (𝐾 𝑇 = 𝑛) · 𝑛 = 𝑘 0 вероятность не меняется после одного шага, значит она постоянна 𝑎 𝑎+𝑏 5.122. а) Рисуем граф (2 : 2) → (3 : 1), (3 : 1) → (4 : 0) с вероятностью 0.25 и (3 : 1) → (2 : 2) с вероятностью Чтобы получить один цвет за 𝑛 замен нужно раз пройти цикли дополнительно сделать шаги (2 : 2) → (3 : 1) → (4 : Отсюда 𝑝 𝑛 = (︀ 3 4 )︀ 𝑛 2 −1 1 геометрическое распределение для четных б) Также рисуем граф. 
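The log 2 answer for 5.114 above can be sanity-checked by Monte Carlo. Since X₁ > X₂ > … almost surely, the count Σₙ 1{Xₙ ≥ 1/2} is just the number of steps before the chain first drops below 1/2. Seed and trial count below are arbitrary choices:

```python
import math
import random

def count_above_half(rng):
    # X_1 ~ U[0,1], X_n ~ U[0, X_{n-1}]; the sequence is decreasing,
    # so #{n : X_n >= 1/2} = number of steps before the first drop below 1/2
    x, count = 1.0, 0
    while True:
        x = rng.uniform(0.0, x)
        if x >= 0.5:
            count += 1
        else:
            return count

rng = random.Random(0)
trials = 100_000
mean = sum(count_above_half(rng) for _ in range(trials)) / trials
print(mean)  # ≈ log 2 ≈ 0.6931
```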
Из состояния (4 : 2) за пару ходов можно либо вернуться в (4 : 2), либо перейти в (6 : 0). Далее аналогично. Let E(k,n) denote the expectation of the number of rolls before the game ends. We are, of course, particularly interested in finding E(0,n). We roll at least once. Suppose number j comes up. If j is in the set {1, 2, . . . , k}, then the game ends. This happens with a probability of k/n. If j is in the set {k+1, k+2, . . . , n}, then the game continues. The probability of j being any one of the numbers in the set {k+1, k+2, . . . , n} is 1/n. Imagine j is one of the numbers in the set {k+1, k+2, . . . , n}. Before you roll again, you ask yourself “starting from this point in time, on average, how many times do I roll before the game ends?” This is basically the same question as E(k,n), with the difference that instead of k, we now have j. So, the answer is E(j,n). We can now express E(k,n) in terms of the E(j,n)’s. Here’s how: E(k,n) = 1 + (k/n)·[the first roll j ≤ k] + Σ (1/n)·E(j,n), where j runs from k+1 through n. Of course, [the first roll j ≤ k] tells us that in this event there are no more rolls, as the game has already ended. So, (*1): E(k,n) = 1 + (1/n)·Σ E(j,n), j from k+1 through n. So, (*2): Σ E(j,n), j from k+1 through n, equals n·E(k,n) − n. The preceding sum begins with j = k+1. So, writing a new sum in which j begins with k+2, we get: (*3): Σ E(j,n), j from k+2 through n, equals n·E(k+1,n) − n. Now we will ‘break up’ (*1): E(k,n) = 1 + (1/n)·(E(k+1,n) + Σ E(j,n), j from k+2 through n), which in conjunction with (*3) becomes: E(k,n) = 1 + (1/n)·(E(k+1,n) + n·E(k+1,n) − n), which, when simplified, becomes: E(k,n) = ((n+1)/n)·E(k+1,n). Or equivalently, (*4): E(k,n) = (n/(n+1))·E(k−1,n). Iterating (*4) gives us: (*5): E(k,n) = (n/(n+1))^k · E(0,n). Now, recall that k can be any number in the set {0, 1, 2, . . . , n}. So, for k = n, we do know already that E(n,n) = 1.
So, for k=n, (*5) gives us: 1 = 𝐸(𝑛, 𝑛) = (𝑛/(𝑛 + 1)) 𝑛 * 𝐸(0, 𝑛) . From which we get 𝐸(0, 𝑛) = (1 + 1/𝑛) 𝑛 5.125. 5.126. 12 Пусть 𝑝 𝑖 - вероятность того, что в очередном месте (на работе или дома) окажется 𝑖 зонтиков. Получаем соотношения 0.2𝑝 𝑛 (1) 𝑝 𝑖 = 0.2𝑝 𝑛−𝑖 + 0.8𝑝(𝑛 − 𝑖 + 1) , 0 < 𝑖 < 𝑛 (2) 𝑝 𝑛 = 𝑝 0 + Также, конечно, ∑︀ 𝑝 𝑖 = 1 . (Рассмотрим систему из (𝑛 − 1) уравнения типа (2). Если немного пофантизировать, то можно заметить, что они равновильны системе 𝑝 1 = 𝑝 2 = ... = Пользуясь (3) и (4) получаем 𝑝 𝑛 (0.2 + 𝑛) = Значит 𝑝 0 = 1 𝑛+0.2 . Нас интересует 0.8𝑝 0 5.128. 5.129. Let 𝑃 be the conditional probability that 𝐴 eventually wins, given that they are tied. (Hence the overall probability of 𝐴 winning starting from the beginning will be 𝑃 .) Let 𝑃 1 be the probability that 𝐴 wins, given that 𝐴 has the advantage, and let 𝑃 2 be the probability that 𝐴 wins, given that B has the advantage. From the tied state, we move to the state of 𝐴 having the advantage with probability 𝑝 and to the state of 𝐵 having the advantage with probability 1 − 𝑝. That gives us 𝑃 = 𝑝𝑃 1 + (1 − 𝑝)𝑃 2 Similar consideration of what happens from the advantage-A state and the advantage-B state gives rise to two more equations: 𝑃 1 = 𝑝 + (1 − 𝑝)𝑃 𝑃 2 = 𝑝𝑃 + (1 − 𝑝) · 0 In matrix form, we have this system: ⎡ ⎣ 1 −𝑝 𝑝 − 1 0 𝑝 − 1 1 0 𝑝 −𝑝 0 1 0 ⎤ ⎦ Solving it (off stage), we get that 𝑃 = 𝑝 2 1−2𝑝+2𝑝 2 , 𝑃 1 = 𝑝−𝑝 2 +𝑝 3 1−2𝑝+2𝑝 2 and 𝑃 2 = 𝑝 3 1−2𝑝+2𝑝 2 𝑃 is the answer we want. We say that A wins the match with probability 𝑝 2 1−2𝑝+2𝑝 2 ТВИМС-задачник. Демешев Борис. roah@yandex.ru 131 If we put the values 𝑝 = 0,𝑝 = 1,𝑝 = 1 2 into this expression, we get the results we should expect. A graph of this function will be sigmoidal, steepest in the middle, and approximately doubling the advantage in a neighborhood of the middle. That is, if p is 51%, then the probability of A winning the match will be nearly 52%. We can solve for the expected time in the same way. 
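Stepping back to the dice game: the recursion (*1) and the closed form E(0,n) = (1 + 1/n)ⁿ derived above can be checked exactly in rational arithmetic:

```python
from fractions import Fraction

def expected_rolls(n):
    # (*1): E(k, n) = 1 + (1/n) * sum_{j=k+1}^{n} E(j, n), with E(n, n) = 1
    E = [Fraction(0)] * (n + 1)
    E[n] = Fraction(1)
    for k in range(n - 1, -1, -1):
        E[k] = 1 + Fraction(1, n) * sum(E[j] for j in range(k + 1, n + 1))
    return E[0]

# E(0, n) = ((n+1)/n)^n, exactly
for n in (1, 2, 3, 10):
    assert expected_rolls(n) == Fraction(n + 1, n) ** n
print(expected_rolls(10))  # → 25937424601/10000000000, i.e. (11/10)^10
```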
Let 𝑇 be the conditional expectation of the remaining time, given a tied state, and let 𝑇 1 and 𝑇 2 be the expected remaining time from the states of advantage-A and advantage-B, respectively. We get the following system of equations: 𝑇 = 1 + 𝑝𝑇 1 + (1 − 𝑝)𝑇 2 𝑇 1 = 𝑝 + (1 − 𝑝)(1 + 𝑇 ) 𝑇 2 = 𝑝(1 + 𝑇 ) + (1 − 𝑝) That gives us 𝑇 = 2 1−2𝑝+2𝑝 2 and also 𝑇 1 = 3−4𝑝+2𝑝 2 1−2𝑝+2𝑝 2 and 𝑇 2 = 1+2𝑝 2 1−2𝑝+2𝑝 2 𝑇 is the answer we want. Starting at the beginning, the expected length of the match is 2 1−2𝑝+2𝑝 2 games. This has a minimum of 2 games when 𝑝 = 0 or 𝑝 = 1 (which makes sense) and a maximum value of 4 games when 𝑝 = 1 2 5.130. 6.1. 6.2. 𝑃 (𝑁 𝑘 > 𝑗 + 1) = 𝑃 (𝑁 𝑘 > 𝑗 + 1 ∩ 𝑁 𝑘 > 𝑗) = 𝑃 (𝑁 𝑘 > 𝑗 + 1|𝑁 𝑘 > 𝑗)𝑃 (𝑁 𝑘 > 𝑗) 𝐸(𝑁 𝑘 ) = 𝑃 (𝑁 𝑘 > 0) + 𝑃 (𝑁 𝑘 > 1) + 𝑃 (𝑁 𝑘 > 2) + ... 𝐸(𝑁 𝑘 ) = 1 2𝑘 + 1 2𝑘 (︀1 − 1 2𝑘 )︀ + 1 2𝑘 (︀1 − 1 2𝑘 )︀ 2 + ... = 1 e) 𝐸(𝑁 𝑘 ) = (𝑝/𝑞) 𝑘 6.3. 𝑃 (частица никогда не посетит вершину 𝑘}) = 0 𝐴 𝑘 = частица закончит свой маршрут в 𝑘} 𝐵 𝑘 = частица когда-нибудь посетит одно из чисел k-1,k,k+1} 𝑃 (𝐴 𝑘 ) = 𝑃 (𝐴 𝑘 ∩ 𝐵 𝑘 ) = 𝑃 (𝐵 𝑘 ) · 𝑃 (𝐴 𝑘 |𝐵 𝑘 ) = 1 · 𝑃 (А вероятность 𝑃 (𝐴 𝑘 |𝐵 𝑘 ) - это вероятность того, что симметричное случайное блуждание поднимется на 𝑚 − 1 раньше, чем опустится до −1. Значит 𝑃 (𝐴 𝑘 |𝐵 𝑘 ) = 1 𝑚 6.4. 𝑃 (𝜏 = 2𝑘 − 1) = 𝑃 (𝑆 2𝑘−1 = 1 ∩ {𝑆 𝑡 < 1|𝑡 < 2𝑘 − Рассмотрим случай 2𝑘 − 1 ≥ 3. Тогда предыдущие два шага частица сделала вверх, что происходит с вероятностью 2 −2 𝑃 (𝑆 2𝑘−1 = 1 ∩ {𝑆 𝑡 < 1|𝑡 < 2𝑘 − 1}) = 2 −2 · 𝑃 (𝑆 2𝑘−3 = −1 ∩ {𝑆 𝑡 < 1|𝑡 < 2𝑘 − Требуется вероятность закончить в точке −1, не касаясь точки 1. Траекторий, касающихся 1, и заканчивающихся в −1 ровно столько сколько траекторий, заканчивающихся в 3. Следовательно (𝜏 = 2𝑘 − 1) = 2 −2 (𝑃 (𝑆 2𝑘−3 = −1) − 𝑃 (𝑆 2𝑘−3 = 3)) = 2 −(2𝑘−1) (︀𝐶 𝑘−1 2𝑘−3 − После арифметических преобразований получаем (𝜏 = 2𝑘 − 1) = 2 −2𝑘 1 Эта формула верна и для 2𝑘 − 1 < 3 7.1. 7.2. 7.3. 7.4. 7.5. 𝑎 = 𝑏 = 2004·1003 𝐶 3 2006 б) 1/4 7.7. a) 3/8 в-а) Зафиксируем координату посадки первого корабля. 
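The win-by-two analysis in 5.129 above (P = p²/(1 − 2p + 2p²), E(T) = 2/(1 − 2p + 2p²)) can be spot-checked by simulation; seed, p, and trial count are arbitrary choices:

```python
import random

def play_deuce(p, rng):
    """Win-by-two from a tied state; A wins each game with prob p.
    Returns (a_won, number_of_games_played)."""
    lead, games = 0, 0
    while abs(lead) < 2:
        lead += 1 if rng.random() < p else -1
        games += 1
    return lead == 2, games

rng = random.Random(1)
p, trials = 0.6, 200_000
wins = total = 0
for _ in range(trials):
    won, g = play_deuce(p, rng)
    wins += won
    total += g

denom = 1 - 2 * p + 2 * p * p
print(wins / trials)   # ≈ p**2 / denom ≈ 0.6923
print(total / trials)  # ≈ 2 / denom ≈ 3.846
```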
Обозначим центральный угол между первыми вторым кораблем 𝛼. Функция плотности имеет вид 𝑝(𝛼) = 2𝜋 Итог ∫︀ 𝜋/2 0 𝑝(𝛼) 𝛼+𝜋 2𝜋 𝑑𝛼 + ∫︀ 𝜋 𝜋/2 𝑝(𝛼) 𝜋−𝛼 2𝜋 𝑑𝛼 = 𝜋+2 4𝜋 7.8. The p.d.f of distance is 𝑠𝑖𝑛(𝑥/𝑅). take the mode of this function you get 𝜋/2 The second point chosen lies on a circle with the first point. The maximum distance between the two is one-half the circumference of this circle, i.e. 𝜋 · 𝑅, and assuming unit radius, the maximum distance is 𝜋. Half of the second points chosen will lie at a distance longer than 𝜋/2 and half will lie at a distance less than 𝜋/2. Thus, the expected distance is Точки 𝐴𝐵𝐶 должны идти подряд. Есть 6 способов выбрать три точки подряди выбрать три ТВИМС-задачник. Демешев Борис. точки без ограничений. Итого 6 = 0.3 7.10. 𝐸(𝑋 𝑛 ) = 2 Одну точку зафиксируем. Вместо двух других точек выберем две оси. На двух осях есть варианта выбора точек Рисуем множество на плоскости, ищем площадь. Solution3: Пишем интегралы of Это один и тот же вопрос of a: это b для случая 𝑡 = 1/2 Solution b1: 𝑝(𝑛, 𝑡) = 𝑛 · 𝑡 𝑛−1 , Для левой точки имеется 𝑛 вариантов, при заданной левой точке, остальные точки (их (𝑛 − 1)) должны попасть в отрезок длины 2𝜋 · 𝑡 при длине окружности 2𝜋. Solution Имеется 𝑛(𝑛 − 1) способов выбрать левую и правую границу. Оставшиеся (𝑛 − 2) точки должны попасть между ними. Получаем интеграл (перебираем все расстояния между границами − 1)𝑎 𝑛−2 𝑑𝑎 = 𝑛 · 𝑡 𝑛−1 Solution v: Исходя из задачи про вероятность для 𝑛 точек лежать внутри дуги, получаем сумму) = 𝑃 (𝑋 ≥ 1) + 𝑃 (𝑋 ≥ 2) + 𝑃 (𝑋 ≥ 3) + ... = 1 + 𝑡 0 + 2𝑡 1 + 3𝑡 2 + ... = 1 + 1 (1−𝑡) 2 7.16. 7.17. 𝑛+1 4𝑛−2 Одну точку зафиксируем. Вместо трех других точек выберем три оси. На трех осях есть вариантов выбора точек 8 7.19. Solution Во-первых. Вместо выбора 𝑛 точек наугад будем выбирать 𝑛 осей (𝑛 пар точек) наугад, а затем на каждой оси выбирать наугад одну из двух точек. Каждой оси соответствует один экватор. Во-вторых. Имеющиеся 𝑛 экваторов разрезают яблоко на 𝑛 2 − 𝑛 + части. 
Доказываем по индукции: каждый новый экватор пересекает имеющиеся k экваторов в 2k точках (каждый — по два раза), и каждое пересечение дает одну новую часть. В-третьих. На n осях имеется 2^n разных комбинаций выбора конкретных n точек. В-четвертых. Рассмотрим произвольную часть из n² − n + 2 имеющихся. На этой части выберем произвольную точку. Проведем ось через эту точку. Разрежем яблоко по соответствующему экватору. Выберем ту половину, в которой лежит выбранная точка. В этой половине лежат n точек из заданных пар. Любая точка из той же части приведет к отрезанию тех же n точек. Итого: p = (n² − n + 2)/2^n. 7.20. 8/(2π), не перепроверял 7.21. 7.22. нет; например, x = √r·cos(θ), y = √r·sin(θ). Графически легко получить сумму бесконечного ряда. Она равна … Графически на квадрате получаем две суммы геометрической прогрессии. p = 1/3. Т.е. единица более вероятна, чем другие числа. Это как-то связано с распределением Бенфорда. 7.28. 7.29. 0.6 7.30. (n = 3): suppose the two cut points are at distance x from one end and y from the other end. The length of the middle portion would then be 1 − (x + y). You can plot x and y on Cartesian axes with x, y < 1 and the following set of constraints: x + y > 1 − (x + y), x + 1 − (x + y) > y, y + 1 − (x + y) > x. When you plot these three constraints on the Cartesian system, you will see that the feasible area for triangle formation is 1/8, while the feasible area for breaking a unit length into three parts is 1/2. Hence the probability is (1/8)/(1/2) = 1/4. Можно через интеграл, можно геометрически посчитать объем, получаем … Ответы: а) равномерно б) да: рассмотрим n-угольник, над вписанной окружностью построим полусферу, на ней равномерно выбираем точку, в качестве случайных величин берем высоты. (3cos²(α) + 2cos(α))/(1 + cos(α))², α = π/n. 7.37. 2/5. Многоугольник нельзя сложить, когда существует обломок длиннее суммы всех остальных обломков. Или, равносильно, если существует обломок длины больше половины сосульки. Обозначим буквой μ (гипер)объем множества A = {x_1 + x_2 + ... + x_n = 1}.
Тогда объем множества ∩ 𝑥 𝑖 > равен в силу того, что множества 𝐴 и 𝐴 ∩ 𝑥 𝑖 > подобны с коэффициентом. Искомая вероятность равна 𝑝 = 1 − 𝑛 0.5 𝑛−1 𝜇 𝜇 = 1 − 𝑛 2 𝑛−1 8.1. 8.2. 8.3. 8.4. 8.5. 8.6. 8.7. 8.8. 8.9. 8.10. 8.11. 8.12. 𝑎 𝑎+𝑏 , этот ответ можно получить взяв двойной интеграл или рассмотрев малое приращение ∆𝑡 для двух независимых пуассоновских потоков экспоненциально с параметром 𝑎 + 𝑏, 1 𝑎+𝑏 8.13. 8.14. 8.15. 8.16. 8.17. 8.18. а) ∫︀ +∞ 30 𝑝(𝑡)𝑑𝑡 = 𝑒 −3 ≈ б) такой же результат, как в «а» в) 1/𝜆 = 10 а) геометрическое распределение б) 𝐸(𝑁) б 365 · 0.00037 = Следовательно, «a», ближайшее целое равно Для Пуассоновского распределения 𝜆 = в) 𝑃 (𝑁 = 0) = 0.99963 365 ≈ г) 𝑃 (𝑁 = 2) = 𝐶 2 365 0.99963 363 0.00037 2 ≈ 𝑒 −𝜆 𝜆 2 /2 8.22. 8.23. 8.24. The probability of observing a taxi before a bus is given by 3/(3 + 2) = 3/5 since the waiting times are independent and exponentially distributed. By the memoryless property both processes then restart and hence the probability of observing (at least) two taxis before the first bus is (3/5) 2 = 9/25 . The probability of observing exactly two taxis before the first bus is (3/5) 2 * (2/5) = 18/125 Общее число бесед - Пуассоновский процесс с интенсивностью 𝑛(𝑛 − Число разглашенных бесед - Пуассоновский процесс с интенсивностью 𝑝𝑛(𝑛 − 1)𝜆 ТВИМС-задачник. Демешев Борис. Ожидаемое время до разглашения: 1 𝑛(𝑛−1)𝑝𝜆 8.28. найти 𝐸(2 𝑁 ) , вроде бы экспоненциально св) 𝑑𝑥 = (1 − 𝑥)𝑣𝑑𝑡, 𝑥(𝑡) = 1 − минимум экспоненциальных - тоже экспоненциальное, значит время будет равно 𝐸(𝑇 ) = 1 12 + 1 11 + ... + 1 по Пуассону с 𝜆 = 1 9.1. 9.2. 9.3. 9.4. 9.5. 9.6. 9.7. 9.8. 9.9. 9.10. 10.1. 11.1. a) 𝑃 ({𝑋 делится на 𝑝}) = 𝑝 −𝑠 b) Левая и правая части - это вероятность того, что число не делится ни на одно простое. Такое число одно - единица. 
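Пункт б) задачи 11.1 выше отождествляет ∏_{p простое}(1 − p^{−s}) с P(X = 1) = 1/ζ(s). A numeric sketch at s = 2, where 1/ζ(2) = 6/π²; the sieve bound 100 000 is an arbitrary truncation:

```python
from math import pi

def primes_upto(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, ok in enumerate(sieve) if ok]

s = 2
product = 1.0
for p in primes_upto(100_000):
    product *= 1 - p ** -s
print(product)  # ≈ 1/zeta(2) = 6/pi^2 ≈ 0.6079
```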
в) 𝑃 г) Если 𝑋 делится на 𝑘, то условное распределение 𝑋 𝑘 совпадает с безусловным распределением 𝑋: 𝑃 ( 𝑋 𝑘 = 𝑛|𝑋 = 0 mod 𝑘) = 𝑛 −𝑠 /𝜉(𝑆) = 𝑃 (𝑋 = д) [better is Аналогично, можно заметить, что в случае, когда 𝑋 и 𝑌 делятся на 𝑘, то 𝑋 𝑘 и 𝑌 𝑘 будут (условно) независимы: 𝑃 ( 𝑋 𝑘 = 𝑥 ∩ 𝑌 𝑘 = 𝑦|𝑋 = 0 mod 𝑘 ∩ 𝑌 = 0 mod 𝑘) = 𝑃 ( 𝑋 𝑘 = 𝑥|𝑋 = 0 mod 𝑘)𝑃 ( 𝑌 𝑘 = 𝑦|𝑌 = 0 mod 𝑘) 𝑃 (не является общим делителем 𝑋 и 𝑌 ) = (1 − 𝑝 −2𝑠 ) 𝑃 (и 𝑌 не имеют общих делителей) = ∏︀ 𝑝 is prime (1 − 𝑝 −2𝑠 ) Вероятность того, что 𝑋 и 𝑌 делятся на 𝑘 равна Искомая вероятность равна 𝑘 −2𝑠 𝜉(2𝑠) 11.2. 11.3. Да. Впишем куб наугад. 𝐴 𝑖 = {𝑖 -ая вершина белая. 𝑃 (𝐴 𝑖 ) = и 𝑃 (∪𝐴 𝑖 ) ≤ ∑︀ 𝑃 (𝐴 𝑖 ) = Следовательно, 𝑃 (все вершины красные) ≥ 0.2, те. искомый куб существует. Множества измеримы по условию take the mean value of the quadratic function 𝑓 = ∑︀ 𝑖,𝑗 𝑎 𝑖𝑗 𝑥 𝑖 𝑥 𝑗 over the points of {−1, 1} 𝑛 That is to say, let 𝑥 𝑖 be a random variable taking -1,1 with the same probability. Now we want to find the expected value of f. This expected value is the sum of expected values of 𝑎 𝑖𝑗 𝑥 𝑖 𝑥 𝑗 for different i,j’s. Now it is easily seen that this expected value for 𝑖 ̸= 𝑗 is 0 and is 𝑎 𝑖𝑖 otherwise. So the expected value of f is just ∑︀ 𝑖 𝑎 𝑖𝑖 which is the trace of the matrix which is equal to the sum of eigenvalues. 11.5. Let’s colour the points black and white and Let the probability that a point has colour black be 𝑥. Now look at any convex polygon 𝑃 . The probability that all its vertices are black is 𝑥 𝑎(𝑃 ) and that all the points outside it are white is (1 − 𝑥) 𝑏(𝑃 ) . The given summation is the probability that there exists a polygon with vertices black and all points outside it are white. But we know this is true (i.e. 1) as all we have to do is consider the convex hull of the black points. Now this is true for all 𝑥 ∈ (0, 1). But this is a polynomial with finite degree and the number of points at which it evaluates 1 is infinite. so, this is an identity and holds ∀𝑥 ∈ R Продумать 11.6. 11.7. 
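The averaging argument above (the mean of the quadratic form xᵀAx over uniform x ∈ {−1, 1}ⁿ equals trace A, hence the sum of the eigenvalues) can be verified by full enumeration; the symmetric matrix below is an arbitrary example:

```python
from itertools import product

A = [[3, 1, -2, 0],
     [1, -5, 4, 2],
     [-2, 4, 7, 1],
     [0, 2, 1, -1]]  # arbitrary symmetric matrix
n = len(A)

total = 0
for x in product((-1, 1), repeat=n):
    total += sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
mean = total / 2 ** n  # E[x^T A x] over the hypercube

trace = sum(A[i][i] for i in range(n))
print(mean == trace)  # → True: the cross terms x_i x_j (i != j) average to 0
```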
11.8. 11.9. 11.10. Накроем стаканом случайную (равномерно выбираемую) область, целиком лежащую внутри квадрата ТВИМС-задачник. Демешев Борис. Ожидаемое число муравьев, попадающих в эту область равно (1 − 𝛿) · 51 · 𝜋 (︀ 1 7 )︀ 2 . Здесь 𝛿 - это площади четырех ненакрываемых кусков в углах квадрата. Получаем, что ожидаемое число больше 3-х (𝛿 = 4−𝜋 49 ). Значит, есть точка, где накроется 4 муравья. 11.11. Выберем случайную перестановку. Посчитаем среднее число неподвижных точек, получим Значит ∑︀ 𝑘 𝑘 𝑝 𝑛,𝑘 𝑛! = 1 . Отсюда ∑︀ 𝑘 𝑘𝑝 𝑛,𝑘 = Один белый отдельно, остальные 99 шаров вместе. 12.3. 12.4. 12.5. 12.6. 𝑇 𝑟𝑎𝑐𝑒(𝑀 ) = ∑︀(𝑒𝑖𝑔𝑒𝑛𝑣𝑎𝑙𝑢𝑒𝑠) , Det(M)=product(eigenvalues) For correlation matrix, since trace=N, where N dimension average(eigenvalues)=1, Then, log(Det)=sum(log(eiegenvalues))=N*average(log(eigenvalues))<=N*log(average(eigenvalues))=0, since log has negative convexity Hence Det<=1 12.7. 2, 4, 5, 3, 1 has an expected time of 13.8 minutes, this is the minimum. My algorithm for solving requires some guesswork. Take everything one decision at a time: (1) Search the first floor or go up? If I search the first floor, 95% of the time I add five minutes to my time. 5% of the time save the amount of time it takes to search floors 2 to 5. I know I can search these floors in 24 minutes, 5% of 24 is less than 95% of 5, so I skip the first floor. (2) Search the second floor or go up? If I skip this floor, I clearly have to go all the way to four, since it wouldn’t make sense to skip 2 but search 3. If I do that and don’t find her, I have to come back to 2. Doing 2 first saves four minutes of travel time and 10% of the time costs five minutes. Clearly, I search the second floor. (3) Search the third floor or go right up to four? This is a little more complicated. If I skip the third floor, I clearly have to go four, five then three. If I search the third floor, then I do four and five. Doing four and five takes 14 minutes if she is on three. 
Searching the third floor costs me five minutes 6/7 of the time and saves me 14 minutes 1/7 of the time; clearly I skip. (4) Now of course I search the fourth floor. After that I clearly prefer the fifth floor to the third, because it has a higher probability and the cost of skipping it is higher. On my way back down, of course I search the third floor, because it has a higher probability than one and a shorter travel time. After that, only one is left. 12.8. 12.9. 12.10. 12.11. вторым 12.12. 12.13. игра не меняется, если делать ставку на последнюю карту. все стратегии оптимальны и дают выигрыш в 1/36 Чтобы максимум 𝑝 𝑖 𝑓 (ˆ 𝑝 𝑖 ) + (1 − 𝑝 𝑖 )𝑓 (1 − был в точке ˆ𝑝 𝑖 = необходимым условием будет) = (1 − 𝑝)𝑓 ′ (1 − Если левую часть обозначить 𝑞(𝑝), то получаем уравнение 𝑞(𝑝) = 𝑞(1 − 𝑝). Берем любую функцию, симметричную относительно 1/2 (останется потом только проверить, что ˆ𝑝 𝑖 = 𝑝 𝑖 - это максимума не минимум. Например, подойдет 𝑞(𝑥) = 1, тогда получаем 𝑓(𝑥) = ln(𝑥), или 𝑞(𝑥) = 2𝑥(1 − 𝑥), тогда получаем 𝑓(𝑥) = −𝑥 2 + а) равновесия в чистых нет. б) Первый игрок должен быть безразличен между усилиями 𝑒 1 ∈ [𝑎; 𝑏] . Те. 𝑈(𝑒 1 ) = 𝑈 (𝑎) = 𝑈 (Если первый игрок выбирает уровень усилий 𝑒 1 , то он выигрывает с вероятностью ∫︀ 𝑒 1 0 𝑝(𝑡)𝑑𝑡 . Следовательно Поскольку есть стратегия 𝑒 1 = 0 , приносящая полезность 0, любая играемая стратегия должна приносить платеж не меньше 0. ТВИМС-задачник. Демешев Борис. Отсюда 𝑎 = 0 и 𝑏 = 1/ √ 2 . Взяв производную по получаем) = на отрезке [0; 1/ √ 2] 12.18. 12.19. (C.L. При любом варианте правил, Маша будет ходить белыми не больше 10 раза Саша - не больше раз. Победитель гарантированно определяется за 19 партий. Если победитель определился раньше, то дополним турнир недостающими фиктивными партиями (чтобы Саша ходил белыми ровно 9 раза Маша - ровно 10 раз. В турнире из 19 партий победителем при любом варианте правил оказывается тот, кто выиграл больше партий. Следовательно, Машины шансы не зависят от выбираемого варианта правил. 12.20. 
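The scoring-rule claim above — with f(x) = ln x, the forecaster maximizes p·f(p̂) + (1 − p)·f(1 − p̂) by reporting p̂ = p — can be checked on a grid; p = 0.3 and the grid step are arbitrary choices:

```python
import math

def expected_score(p, p_hat):
    # expected log score when the event has true probability p
    # and the forecaster reports p_hat
    return p * math.log(p_hat) + (1 - p) * math.log(1 - p_hat)

p = 0.3
grid = [k / 1000 for k in range(1, 1000)]
best = max(grid, key=lambda q: expected_score(p, q))
print(best)  # → 0.3: truthful reporting is optimal (the log score is proper)
```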
12.21. 12.22. Решение (Выбирая свой ответ наугад, Саша может гарантировать себе вероятность выигрыша 50%. Передавая Саше ту величину, значение которой легло от далее, Маша лишает Сашу релевантной для отгадывания информации. 12.25. Заметим, что 𝑎 1 𝑏 6 + 𝑎 6 𝑏 1 < и 𝑎 1 𝑏 6 + 𝑎 6 𝑏 1 < Перемножаем два неравенства, получаем противоречие. 12.29. − 1 𝑙𝑛(1−𝑝) ≈ 1 𝑝 12.30. Дискретная, 0 и 1 с вероятностью по 0.5 12.31. центр-угол-далее по смыслу. (?). 𝑝 = а) Если текущее 𝑝 > ¯𝑝, то ищем дальше, если 𝑝 < ¯𝑝, то берем Ожидаемые общие издержки удовлетворяют уравнению 𝐶 = 𝑐 + 𝑃 (𝑝 < ¯ 𝑝)𝐸(𝑝|𝑝 < ¯ 𝑝) + (1 − 𝑃 (𝑝 < ¯ 𝑝))𝑇 Отсюда 𝑇 𝐶 Оптимальное ˆ𝑝 б (𝑝< в) В точке оптимума 𝑇 𝐶 е) Ничего. Возвращаться в уже посещенный магазин не имеет смысла при использовании оптимальной стратегии, т.к. желание вернуться означает то, что из него не надо было уходить 12.33. а) 1, 2, б) вероятности должны быть пропорциональны достоинству монет: Рассмотрим монеты достоинством 1 и 𝑛. Пусть они застревают с вероятностями 𝑝 и Ожидаемые потери от последовательности 1, 𝑛: 𝑝 + (1 − 𝑝)𝑥(1 + Ожидаемые потери от последовательности 𝑛, 1: 𝑥 + (1 − 𝑥)𝑝(1 + Они равны при 𝑥 = Если монет больше двух, а вероятности пропорциональны, то две соседние можно переставить местами, что не скажется на ожидаемых потерях 12.34. оптимальное место находится руками, 20, хотя саму вероятность выигрыша без компьютера посчитать очень трудоемко. 12.35. Например, из теоремы Малера следует, что начиная с го знака вне может идти 41𝑛 нулей подряд. Следовательно делим доллар на 10 41𝑛 − часть и ставим их на все последовательности ТВИМС-задачник. Демешев Борис. кроме последовательности из нулей. 12.36. 12.37. 12.38. да, два линейных уравнения. 12.45. 12.46. Поиск оптимальной стратегии. Если сейчас 0 долларов, то брать 1 доллар. Назовем ситуацию, шоколадной если можно выиграть без риска. Те. игр осталось больше, чем недостающее количество денег. Если игрок не в шоколаде, то оптимальным будет рисковать на первом ходе. Почему? 
Получение одного доллара можно перенести на попозже. В любой оптимальной стратегии достаточно одного успеха для выигрыша. Почему? Допустим стратегии необходимо два успеха в двух рискованых играх. Заменим их на одну рискованную игру. Получим большую вероятность. Оптимальная стратегия: Если сейчас 0 долларов, то брать доллар. Пусть 𝑑 - дефицит в долларах, а 𝑘 - число оставшихся попыток. Если 𝑑 ≤ 𝑘, то брать по доллару. Если 𝑑 > 𝑘, то с риском попробовать захапнуть 1 + 𝑑 − 𝑘 долларов = 2 , Порог 𝑇 , 𝐺 𝑇 = ∫︁ 𝑇 𝑥=0 (︂∫︁ 1−𝑥 𝑦=0 (𝑥 + 𝑦) 𝑑𝑦 )︂ 𝑑𝑥 + ∫︁ 1 𝑥=𝑇 𝑥 𝑑𝑥 , 𝐺 𝑇 = 1+𝑇 −𝑇 2 − 𝑇 3 Источник Ferguson?. 12.50. Равновмерно на [0; 2], гарантирует выигрыш с вероятностью не менее 0.5. 13.1. 13.2. 13.3. 13.4. 13.5. 13.6. 13.7. 13.8. 13.9. 13.10. 13.11. 13.12. 13.13. 13.14. 13.15. 13.16. 13.17. 13.18. 13.19. 13.20. 13.21. 13.22. 13.23. 13.24. 13.25. 13.26. 13.27. 13.28. 13.29. 13.30. 13.31. 13.32. 13.33. 13.34. 13.35. 13.36. 13.37. 13.38. 13.39. 13.40. 13.41. 13.42. 13.43. 13.44. 13.45. 13.46. 13.47. 13.48. 13.49. 13.50. 13.51. 13.52. 13.53. 13.54. 13.55. 13.56. 13.57. 13.58. 13.59. 13.60. 13.61. Ответ независимы стандартные нормальные. 13.64. 13.65. 13.66. 14.1. 14.2. 14.3. 14.4. 14.5. 14.6. 14.7. 14.8. 14.9. 14.10. 14.11. 14.12. 14.13. 14.14. 14.15. 14.16. 14.17. 14.18. 14.19. 14.20. 14.21. 14.22. 14.23. 14.24. 14.25. вроде бы 𝑎 1 = 𝜎 2 2 𝜎 2 1 +𝜎 2 и 𝑎 2 = 1 − 𝑎 1 14.28. Только если 𝑋 с вероятностью 1 константа. 𝐸(𝑃 ) = 10. 𝐸( 10 𝑃 ) ≥ 1 . Заплатили поровну, купила больше баба Катя. 𝐸(𝑃 + 10) = 10𝐸(1 + 10 𝑃 ) . 𝐸( 10 𝑃 ) ≥ 1 . 𝐸(𝑃 ) ≥ 10. Заплатила больше баба Маша, купила больше баба Катя) = 3 − 4𝜃 , 𝜃 ∈ [0; 1/3], 𝜃 𝑚𝑎𝑥 = 0 , 𝜃 𝑚𝑖𝑛 = 1/3 только линейной. Рассмотрите случайную величину, которая принимает значение 𝑥 с вероят- ТВИМС-задачник. Демешев Борис. roah@yandex.ru 138 ностью 𝑝 и значение 𝑦 с вероятностью (1 − 𝑝) 14.38. 𝑉 𝑎𝑟(𝑋) 15.1. 15.2. 15.3. 15.4. 𝑝 𝑌 (𝑡) при 𝑡 ∈ [0; 1] 15.5. 15.6. 15.7. 15.8. 15.9. 15.10. 15.11. 15.12. 15.13. 15.14. 15.15. 15.16. 15.17. 15.18. 
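Сравнение двух бабушек выше опирается на неравенство Йенсена: если E(P) = 10, то E(10/P) ≥ 10/E(P) = 1, причем равенство — только для постоянной P. A minimal exact check on a hypothetical two-point price distribution:

```python
from fractions import Fraction

# hypothetical price: 5 or 15 with equal probability, so E(P) = 10
prices = [(Fraction(5), Fraction(1, 2)), (Fraction(15), Fraction(1, 2))]

EP = sum(p * w for p, w in prices)                    # E(P)
E_inv = sum(Fraction(10) / p * w for p, w in prices)  # E(10/P)

print(EP)     # → 10
print(E_inv)  # → 4/3, strictly above 1 since P is not constant
```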
15.19. 15.20. 15.21. 15.22. 15.23. 15.24. 15.25. 15.26. 15.27. 15.28. 15.29. 15.30. a) 1 𝑛+1 b) 𝑛 𝑛+1 c) intuition 𝑐 1 𝑛+1 + 1 𝑛 𝑛+1 15.31. 15.32. 15.33. 15.34. 15.35. 15.36. 15.37. 15.38. 15.39. 15.40. 16.1. 16.2. 16.3. 𝑓 (𝑎, 𝑏) = 𝑎!𝑏! (𝑎+𝑏+1)! 16.4. 16.5. 16.6. 16.7. 16.8. 16.9. 16.10. 16.11. 17.1. 17.2. 17.3. 17.4. 17.5. 17.6. 17.7. 17.8. 17.9. 17.10. 17.11. 17.12. 17.13. The covariance of X and Y is -1/18. The expected value of X, conditional on Y, is (2-Y)/5. 17.14. 0.5𝑟 · √ 0.21 + 0.35 геометрически с 𝑝 = 1 − (1 − 𝑝 1 ) · (1 − 𝑝 2 ) 17.16. 17.17. 17.18. 17.19. 17.20. 17.21. 17.22. 17.23. 17.24. 17.25. 17.26. 17.27. 17.28. 17.29. 𝑝(𝑡) если 𝑡 ∈ [0; 1) 2 − если 𝑡 ∈ [1; иначе ТВИМС-задачник. Демешев Борис. roah@yandex.ru 139 𝑝(𝑡) = {︂ если 𝑡 ∈ (0; иначе. Нет, на квадрате появляется противоречие) = 𝑎 3 2 𝑦 2 𝑒 −𝑎𝑦 , 𝑦 > 0 𝐸(𝑌 ) = 3/𝑎 𝐸(𝑋|𝑌 = 1) = 1 17.37. 17.38. Найдем функцию плотности ln(𝑋 𝑖 ) . Окажется, что это экспоненциальное распределение с параметром 𝜆 = Находим закон распределения суммы Находим закон распределения 𝑌 . Source: aops, t=187872 17.40. 17.41. 17.42. 17.43. зависимы, но корреляция равна нулю. 17.47. 17.48. 17.49. равномерно :) Here is my intuitive explanation: Let 𝑊 = 𝑙𝑜𝑔(𝑋𝑌 ) 𝑍 = 𝑍(𝑙𝑜𝑔𝑋 + 𝑙𝑜𝑔𝑌 ) We know that the log of a uniform random variable has the distribution of an exponential random variable, therefore 𝑙𝑜𝑔𝑋 and 𝑙𝑜𝑔𝑌 are 2 independent expo(1). We can interpret them as the inter-arrival times of a poisson process with intensity 1. Given the second arrival time T2 of a PP(1), it is well known that the first arrival time is uniformly distributed between 0 and T2. An other way of saying that is 𝑍 * 𝑇 2 has the same distribution of the first arrival time of a PP(1), with Z Unif[0,1]. We can conclude by seeing that 𝑇 2 = 𝑙𝑜𝑔𝑋 + 𝑙𝑜𝑔𝑌 , so 𝑊 = 𝑍 * 𝑇 2 is an exponential random variable with parameter1 (first arrival of a PP(1)) and 𝑒 𝑊 is Unif[0,1] 17.52. 17.53. 17.54. 
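The 17.49 claim above — for independent X, Y, Z ~ U[0, 1], (XY)^Z is again U[0, 1] — can be spot-checked by comparing sample statistics with the uniform law (seed and trial count are arbitrary):

```python
import random

rng = random.Random(4)
trials = 100_000
# W = (X*Y) ** Z for three independent uniforms
samples = [(rng.random() * rng.random()) ** rng.random() for _ in range(trials)]

mean = sum(w for w in samples) / trials
second_moment = sum(w * w for w in samples) / trials
below_quarter = sum(w < 0.25 for w in samples) / trials
print(mean, second_moment, below_quarter)  # ≈ 1/2, 1/3, 1/4 — uniform values
```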
possible solution: If X,Y are uniform in [0, 1], we can define a new variable 𝑍 that maps Y into the interval [0, 𝑎] if 𝑋 < 𝑎, and into the interval [𝑎, 1] if 𝑋 > 𝑎. The joint distribution for 𝑋, 𝑍 is uniform in the subregion defined by the squares ((0, 0)(𝑎, 𝑎)) and ((𝑎, 𝑎)(1, 1)). The marginal distribution for 𝑍 is obviouly uniform in [0, 1] [ edited: I thought the problem was to get a correlation equal to 0.5] 𝑎 = 0.5 gives the maximum correlation that can be produced using this method (𝜌 = 0.75). The 𝑅 function below calculates the correlation for a given a (it’s straightforward to solve it analytically, but I didn’t do it for the general case). If the desired correlation is over 0.75, the method can be generalized to get a joint distribution consisiting of more than two squares along the diagonal ((0, 0)(𝑎, 𝑎)) ((𝑎, 𝑎)(𝑏, 𝑏))......((𝑧𝑧)(11)). In the limit, the joint distribution collapses to the diagonal (perfect correlation, 𝑋 = 𝑍). Solution2: Let 𝑌 2 = 𝑌 if 𝑋 ≤ (1 + 𝑝)/2, or 1 − 𝑌 otherwise. 𝑌 2 is still 𝑈[0, 1] and 𝐶𝑜𝑟𝑟(𝑌, 𝑌 2 ) = Ответ да, только если 𝑋 = 𝑌 тождественно (через дисперсию суммы) 17.56. а) 𝑝 = б) при любых допустимых в) 𝐸(𝑋) = 1 + 𝑝/2, 𝑉 𝑎𝑟(𝑋) = (5−𝑝)𝑝 4 , 𝐶𝑜𝑣(𝑋, 𝑌 ) = −(4 + 𝑝)𝑝/4, 𝐶𝑜𝑟𝑟(𝑋, 𝑌 ) = −(4 + 𝑝)/(5 − г) −4/5 ТВИМС-задачник. Демешев Борис. д) Вася неправ. Даже если две величины равны с вероятностью 0.9999, ас вероятностью изменяются в противоположном направлении, то корреляция будет сильно отрицательной. 17.57. Ответ: 𝑞𝑝 𝑛−1 1−𝑝 𝑛+1 Да. Т.к. 𝐶𝑜𝑣(𝑎𝑋 +𝑏, 𝑐𝑌 +𝑑) = 𝑎𝑐𝐶𝑜𝑣(𝑋, 𝑌 ), то можно считать, что 𝑋 и 𝑌 принимают значения только 0 и 1. Тогда 𝐸(𝑋𝑌 )−𝐸(𝑋)𝐸(𝑌 ) = 𝑃 (𝑋 = 1∩𝑌 = 1)−𝑃 (𝑋 = 1)𝑃 (𝑌 = 1) = 0. Что и означает независимость · 0.003 · 0.001 17.62. I additionally assume that the first non-zero 𝑋 𝑡 is equally likely to be 1 or −1. So: 𝐸(𝑋 𝑡 ) = 0 for all 𝑡. 
Var(X_t) = E(X_t²) = E(E_t²) = E(E_t) = 1/2. Cov(X_t, X_{t−k}) = E(X_t X_{t−k}) = P(X_t = 1 ∩ X_{t−k} = 1) + P(X_t = −1 ∩ X_{t−k} = −1) − P(X_t = −1 ∩ X_{t−k} = 1) − P(X_t = 1 ∩ X_{t−k} = −1). P(X_t = 1 ∩ X_{t−k} = 1) = P(X_t = 1 ∩ X_{t−k} = 1 | E_t = 1 ∩ E_{t−k} = 1) · P(E_t = 1) · P(E_{t−k} = 1) = 1/4 · P(X_t = 1 ∩ X_{t−k} = 1 | E_t = 1 ∩ E_{t−k} = 1). This conditional probability is the probability that during the k − 1 trials between time t − k and time t there is an odd number of E_n = 1. It equals 0.5 for all k > 1, and it equals 0 for k = 1. As a final result we get: Cov(X_t, X_{t−k}) = 0 for k > 1, and Cov(X_t, X_{t−1}) = −1/4 for k = 1. And Corr(X_t, X_{t−k}) is equal to 0 (for k > 1) and −1/2 (for k = 1). 17.63. yes, they can! 17.64. for example, Z = 2XY − … works. For any a and b: (f(a) − f(b))(g(a) − g(b)) ≥ 0, which is equivalent to the condition f(a)g(a) + f(b)g(b) ≥ f(a)g(b) + f(b)g(a). Take two independent identically distributed random variables X and Y. We get E(f(X)g(X)) + E(f(Y)g(Y)) ≥ E(f(X)g(Y)) + E(f(Y)g(X)). Using the identical distributions and independence, E(f(X)g(X)) ≥ E(f(X)) · E(g(X)), which means the covariance is nonnegative. 18.1. p_Q(t | the coin landed heads) = c · t · … We find P(Q ≤ t | the coin landed heads), paying no attention to the denominator, since it is a constant; then take the derivative. Or intuitively. f(a, b) = a!b!/(a+b+1)!; Prob = f(k + 1, n − k)/f(k, n − k) = (k + 1)/(n + 2). 18.4. 18.5. 18.6. source: aops, t=173773, imho: 5(√2 − 1)/6 19.1. yes 19.2. yes 19.3. no 19.4. yes 19.5. no 19.6. no 19.7. no 19.8. yes 19.9. no 19.10. yes 19.11. yes 19.12. yes 19.13. yes 19.14. no 19.15. yes 19.16. no 19.17. yes 19.18. no 19.19. yes 19.20. yes 19.21. yes 19.22. no 19.23. no 19.24. yes 19.25. yes 19.26. yes 19.27. no 19.28. 19.29. no 19.30. yes 19.31. no 19.32. no 19.33. yes 19.34. no 19.35. true 19.36. no 19.37. 19.38. 19.39. no 19.40. no 19.41. no 19.42. 19.43.
no 20.1. 20.2. 20.3. 20.4. 20.5. 20.6. 20.7. 20.8. 20.9. The largest piece is uniform on [1/2, 1], hence the mean is 3/4. A more general problem. Let Z_n be the smallest piece arising from breaking the unit stick into n pieces using (n − 1) uniform [0, 1] random variables, say X₁, X₂, …, X_{n−1}. We prove that P(Z_n > c) = (1 − cn)^{n−1} for any 0 < c < 1/n, because then a simple integration-by-parts calculation shows that E(Z_n) = 1/n². To prove this equality, it is enough to show that P(X₁ < X₂ < ⋯ < X_{n−1}, Z_n > c) = (1 − cn)^{n−1}/(n − 1)!, because then we can multiply by (n − 1)!, i.e., add this last probability over all possible orderings of the X_i's, to cover all possibilities for {Z_n > c}. Now, how do we prove this last equality? Saying that X₁ < ⋯ < X_{n−1} and Z_n > c is equivalent to saying that X₁ > c, X₂ − X₁ > c, X₃ − X₂ > c, …, X_{n−1} − X_{n−2} > c, 1 − X_{n−1} > c, which is equivalent to the following ranges for the X_i's: X₁ ∈ (c, 1 − (n−1)c), X₂ ∈ (X₁ + c, 1 − (n−2)c), …, X_k ∈ (X_{k−1} + c, 1 − (n−k)c), …, X_{n−1} ∈ (X_{n−2} + c, 1 − c). So now, to calculate the probability of this event, we just have to integrate 1 over this region, i.e., calculate the integral ∫_c^{1−(n−1)c} ∫_{x₁+c}^{1−(n−2)c} ⋯ ∫_{x_{n−2}+c}^{1−c} 1 dx_{n−1} dx_{n−2} ⋯ dx₁. To work through the first (n − 2) integrals, we note the following equality for any 2 ≤ k ≤ n − 1, which is simple enough to verify: ∫_{x_{k−1}+c}^{1−(n−k)c} (1 − (n−k)c − x_k)^{n−(k+1)}/(n − (k+1))! dx_k = (1 − (n−(k−1))c − x_{k−1})^{n−k}/(n − k)!. Then, using induction, the integral we want to calculate reduces to ∫_c^{1−(n−1)c} (1 − (n−1)c − x₁)^{n−2}/(n − 2)! dx₁. But if we think of x₀ as equal to 0, then the integral equality above for 2 ≤ k ≤ n − 1 also applies when k = 1, which gives the answer we want. The size of the kth largest piece is given by (1/n)(1/n + ⋯ + 1/k). 20.10.
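The two formulas in 20.9 above, E(Z_n) = 1/n² for the smallest piece and the harmonic formula for the kth largest, can be sanity-checked by simulation. A Python sketch for n = 4 (sample size and tolerances are arbitrary):

```python
import random

random.seed(1)
N = 200_000
n = 4
sum_min = sum_max = 0.0
for _ in range(N):
    cuts = sorted(random.random() for _ in range(n - 1))
    # piece lengths between consecutive cut points (with 0 and 1 as endpoints)
    pieces = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
    sum_min += min(pieces)
    sum_max += max(pieces)

mean_min = sum_min / N   # theory: 1/n^2 = 0.0625
mean_max = sum_max / N   # theory: (1/4)(1 + 1/2 + 1/3 + 1/4) = 25/48
print(mean_min, mean_max)
```

Both sample means land on the theoretical values to within Monte Carlo noise.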
Recall that the problem concerns a prudent shopper who tries, in several attempts, to collect a complete set of N different coupons. Each attempt provides the collector with a coupon randomly chosen from N known kinds, and there is an unlimited supply of coupons of each kind. It is easy to estimate the expected waiting time to collect all N coupons: E{time to collect N coupons} = 1 + N/(N−1) + N/(N−2) + ⋯ + N/1 = N(log N + γ + O(1/N)). Similarly, E{time to collect N/2 coupons} = 1 + N/(N−1) + N/(N−2) + ⋯ + N/(N/2 + 1) = N(log N + γ + O(1/N) − log(N/2) − γ − O(1/N)) = N log 2 + O(1). ТВИМС-задачник. Демешев Борис. roah@yandex.ru 20.11. a) For the U[0, 20] distribution we have the following. Consider the length of time between buses, X; it has density p_X(x) = 1/20 on [0, 20] and 0 outside [0, 20]. The probability density that you pick an interval of length x is the size-biased density p{pick an interval of length x} = x p(x)/E(X) = x/200. The mean picked interval is thus 40/3 (take the expectation under this density). But when you pick an interval, the time you wait until the end of it is on average half of its length, which gives the answer 20/3. b) We use the basic property of a Poisson process with intensity λ = 1/10: the time interval between two arrivals is distributed as Exp(λ). Using the "loss of memory" property of the exponential random variable, p{X > y + x | X > y} = p{X > x}, we get that the average time till the next arrival is 10 min. Another funny consequence here is that even after waiting 10 minutes, the expected remaining time is still unchanged: it's 10 more minutes. 20.12. Solution 1. The answer is 1/13. We have p{Y_n is divisible by 13} = 1/6 · (p{Y_{n−1} + 1 is divisible by 13} + p{Y_{n−1} + 2 is divisible by 13} + ⋯ + p{Y_{n−1} + 6 is divisible by 13}); the probability above conditions on the last roll. Let U_n(0) = p{Y_n is divisible by 13}, U_n(1) = p{Y_n + 1 is divisible by 13}, …, U_n(k) = p{Y_n + k is divisible by 13} for k = 0, …, 12.
We have U_n(0) = 1/6 · (U_{n−1}(1) + ⋯ + U_{n−1}(6)), U_n(1) = 1/6 · (U_{n−1}(2) + ⋯ + U_{n−1}(7)), etc. We also have U_n(0) + ⋯ + U_n(12) = 1. When n → ∞, assuming that the U_n(k) converge to U(k), we have: U(0) = 1/6 · (U(1) + ⋯ + U(6)), etc., and U(0) + ⋯ + U(12) = 1. So U(k) = 1/13 for all k = 0, …, 12. Solution 2. Consider a 13 × 13 matrix such that the (i, j) entry represents the following probability: given that Y_n is i mod 13, what is the probability that Y_{n+1} is j mod 13? This matrix is «doubly stochastic», because the sum of the entries in each row and each column equals 1. This means that the asymptotic distribution of Y_n is uniform; this requires some application of Markov chain theory. 20.13. The probability that n uniform random variables sum to less than 1 is 1/n!. So the probability that the sum goes over 1 for the first time on the n-th random variable is 1/(n−1)! − 1/n! = (n−1)/n!. The expectation is the sum of n · (n−1)/n! = 1/(n−2)! for n = 2 to ∞, which equals the sum of 1/n! for n = 0 to ∞, which is e. Without much ado, here is a simpler and more general solution. Let f(x) be the expected number of steps for exiting the barrier x for the first time. Conditioning on the first step, ∫₀ˣ p(t) f(x − t) dt = f(x) − 1, where p(t) is the indicator (characteristic) function of (0, 1). For x ≤ 1, we differentiate this integral equation with respect to x and turn it into a simple differential equation. Solving it, we get f(x) = eˣ, x ∈ [0, 1]. For x > 1, we have ∫_{x−1}^{x} f(t) dt = f(x) − 1. Differentiating and solving, we obtain the following recursive solution: f(x) = eˣ (f(n) − ∫_n^x e^{−t} f(t − 1) dt), x ∈ (n, n + 1], n ∈ ℕ. ТВИМС-задачник. Демешев Борис. roah@yandex.ru 20.14. answer: 1/2 20.15.
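The series manipulation in 20.13 (Σ n(n−1)/n! = e) is quick to confirm numerically. A tiny Python sketch; the truncation point 40 is an arbitrary choice far beyond double-precision needs:

```python
import math

# E(number of draws) = sum over n >= 2 of n*(n-1)/n!
#                    = sum over m >= 0 of 1/m!  =  e
series = sum(n * (n - 1) / math.factorial(n) for n in range(2, 41))
print(series, math.e)
```

The truncated sum agrees with e to machine precision.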
Expected time to get N consecutive heads = expected time to get (N − 1) consecutive heads + p · 1 + (1 − p) · (expected time to get N consecutive heads + 1). Letting x_n be the obvious quantity (expected time for n consecutive heads): x_n = x_{n−1} + p + (x_n + 1)(1 − p), so we get the recursion relation x_n = (1/p)(x_{n−1} + 1). Since x₁ = 1/p, we can find all the others. This is the recursion relation for x_n = Σ_{i=1}^{n} 1/pⁱ. 20.16. a tie is inevitable 20.17. 20.18. We can also use the reflection principle to do the counting. Let N be the difference between the number of games Brazil won and Germany won; the status of the series can be denoted as a pair (t, N). Given that Brazil won the first game, the number of total possible paths is C(13, 6). The path starts at (1, 1) and ends at (14, 2), and we need to count all paths from (1, 1) to (14, 2) that never touch N(t) = 0. Consider the complement and use N = 0 as the mirror: every path that ever reaches N = 0 can be viewed as starting from (1, −1) and ending at (14, 2), i.e., the number of such paths is C(13, 5). So the probability is 1 − C(13, 5)/C(13, 7) = 1/4. I don't know if this is legit, but you could also do it this way: ordinarily, if you just know the results of the two teams, the probability that the winner was always ahead is (w − l)/(w + l), where w is the number of votes (games) for the winner and l the number for the loser. So in this case we have (8 − 6)/(8 + 6) = 2/14. However, we are given that Brazil won the first game, so if we divide 2/14 by the probability of Brazil winning the first game we get (2/14)/(8/14) = 1/4. 20.19. 20.20. One can attack this successfully from a high level, with only a few key observations: 1) For any dt > 0 and any t, the conditional probability that, for example, X₁ > t + dt given that X₁ > t is a function of dt alone, i.e., independent of t. This should be clear without writing anything down. 2) Distributions satisfying this criterion can be shown to be exponential. 20.21.
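The reflection-principle count in 20.18 is small enough to verify by brute-force enumeration of the 13 remaining games:

```python
from itertools import combinations
from math import comb

# Brazil wins 8 of 14 games and won game 1; of the remaining 13 games it wins 7.
# Count orderings in which Brazil's lead never returns to 0.
total = good = 0
for wins in combinations(range(13), 7):
    total += 1
    wins = set(wins)
    lead = 1                      # lead after game 1
    ok = True
    for g in range(13):
        lead += 1 if g in wins else -1
        if lead == 0:
            ok = False
            break
    good += ok
# Reflection principle predicts comb(13, 7) - comb(13, 5) = 429 surviving paths.
print(good, total, good / total)
```

The enumeration reproduces 429/1716 = 1/4 exactly.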
A sequence p(n) is defined by the recurrence p(n) = (1/k) · Σ_{j=1}^{k} p(n − j).   (1) The values p[1], …, p[k] represent the initial conditions. The sequence converges to some finite value as n → +∞ for any initial conditions (easy). k = 6, p[1] = p[2] = ⋯ = p[5] = 0, p[6] = 1 is the special case of the probability of hitting the cumulative sum n in a process of throwing a die. We shall prove the general formula that p(n) → 2/(k + 1) as n → ∞ if p[1] = ⋯ = p[k−1] = 0, p[k] = 1. Proof. Let S_m denote the sequence (1) with initial conditions p[1] = ⋯ = p[m−1] = 0, p[m] = 1, p[m+1] = ⋯ = p[k] = 0. In that notation, we are proving that S_k → 2/(k + 1). First, notice that S_1 → (1/k) · S_k (here "S → c · S_k" means the limit of S is c times the limit of S_k). Now suppose we have proven that S_m → (m/k) · S_k. Consider the new sequence S_{m+1} − S_m (i.e., every element is the difference of the corresponding elements). That new sequence has the initial conditions p[1] = ⋯ = p[m−1] = 0, p[m] = −1, p[m+1] = 1, p[m+2] = ⋯ = p[k] = 0. It is easy to see that (S_{m+1} − S_m) → (1/k) · S_k, i.e. S_{m+1} → ((m + 1)/k) · S_k, and therefore S_m → (m/k) · S_k for every m = 1, …, k. Now consider the sequence S_1 + ⋯ + S_k, i.e. the one having initial conditions p[1] = ⋯ = p[k] = 1. That sequence is constant and equal to 1 at every point. At the same time, it also converges to (1/k + 2/k + ⋯ + k/k) · S_k = ((k + 1)/2) · S_k. We obtain ((k + 1)/2) · S_k = 1, i.e. S_k → 2/(k + 1). Q.E.D. The special case k = 6: p(n) → 2/7 = 1/3.5. 20.22. You need one more thing to solve this problem: the price of bets on individual coin flips.
If we assume that we can bet $1 at even money, so we win $1 if a flip is heads and pay $1 if the flip is tails, then we have the following replication values (heads-tails so far: option value):

4-0: $2, 3-1: $1, 2-2: $0, 1-3: $0, 0-4: $0
3-0: $1.50, 2-1: $0.50, 1-2: $0, 0-3: $0
2-0: $1, 1-1: $0.25, 0-2: $0
1-0: $0.625, 0-1: $0.125
0-0: $0.375

The delta is $0.25; that's how much you bet on the first flip to replicate the option.

Yes, we need to bet $0.25 on the first flip, but the delta of the option is 0.5. After the first flip, the underlying will be worth $2.5 or $1.5, whereas the call will be worth $0.625 or $0.125. So Delta = 0.5.

Your answer is better than mine. You have computed the delta with respect to the underlying, which is the sum of four coin flips. I was computing it with respect to a security that pays $1 for heads and −$1 for tails. So you would buy half a coin flip, paying $0.25 to get $0.50 for a head and $0.00 for a tail. I would buy one quarter of a $1 coin-flip bet, getting $0.25 for a head and paying $0.25 for a tail. It all comes out to the same thing, but it shows you always have to be careful what your delta is computed with respect to.
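The replication table above can be reproduced by backward induction on the binomial tree. A Python sketch in exact fractions; the payoff max(heads − 2, 0) is read off from the table rather than stated explicitly in the thread:

```python
from fractions import Fraction

# Option on 4 fair coin flips paying max(heads - 2, 0) at expiry.
def value(heads, flips_done, total=4):
    if flips_done == total:
        return Fraction(max(heads - 2, 0))
    up = value(heads + 1, flips_done + 1, total)
    down = value(heads, flips_done + 1, total)
    return (up + down) / 2               # risk-neutral expectation, no discounting

v0 = value(0, 0)                         # $0.375 at the root
v_up, v_down = value(1, 1), value(0, 1)  # $0.625 and $0.125 after flip 1
# Underlying = heads so far + expected heads remaining: $2.5 or $1.5 after flip 1.
delta = (v_up - v_down) / (Fraction(5, 2) - Fraction(3, 2))
print(v0, delta)                         # 3/8 and 1/2
```

This confirms both numbers from the thread: the root value $0.375 and the delta of 0.5 with respect to the underlying.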
----------------------------------------
Markov property: worth a look (an integral involving Brownian motion is computed).
Puzzle: the ratio of the speeds of the swimmer and the catcher in a round pool.
expectation problem: add uniform random variables until the sum exceeds 1.
Series gamble: possibly a good problem.
shrimp by the pound: possibly a good problem.
color hats puzzle: a solution for the version where the hats are handed out simultaneously.
tossing: on convergence in probability; a good problem.
For the stochastic processes course: play with some function like a realization of Brownian motion (try to compute an ordinary integral of it, draw the graph).
There is probability material at http://www.mathematik.uni-bielefeld.de/
Finish looking through items 4, 6, 19 (possibly also 8).
Worth a look: the … theorem and its extensions.
aops: 88391; 97893, 88977: maximizing the expected kept roll; expected number of rounds; 152421: on the exponential distribution and insurance claims, first-step method.
http://home.att.net/ : the general solution of the weighing problem.
"…Dilemma" by William Poundstone: on sequences that look "random" to a human.
http://www.wilmott.com/messageview.cfm?catid=26&threadid=…
One pack holds 56 M&M's (50 grams). M&M's come in 5 colors (not counting blue…).
math nerds would call 2^500 finite (Leonid Levin), topic "simple".
On the guess-half-the-average game and empirical data.

20.23. I worked this out as follows. I think this might be more or less along the lines of Robert Israel's analysis, which I didn't actually follow, sad to say. We just analyze one color, looking at the cases where it "wins". After any draw and replacement (a turn), the urn is in a state i with i red balls. Of course, red (say) "wins" if i = N and "loses" if i = 0. We note that the probability of going from i to i + 1 (for 0 < i < N) is always equal to the probability of going from i to i − 1, and this value is i(N − i)/(N(N − 1)).
So we can treat this as a random walk starting from 1, with absorption at i = 0 and i = N. However, the expected number of turns per step depends on the state i and is given by N(N − 1)/(2 i(N − i)); the 2 is because of the two equally probable transitions. Now we just have to evaluate the average number of visits to each state 0 < i < N. I get that the probability of winning starting from state i is just i/N, and I get for the average number of visits to a state, starting from i = 1 (win or lose), 2(N − i)/N. Then the average number of "winning visits" to state i per trial is (i/N) · 2(N − i)/N, and the average number of turns accounted for by these visits is (i/N) · 2(N − i)/N · N(N − 1)/(2 i(N − i)) = (N − 1)/N. But the expected number of wins is 1/N times the number of trials, so the expected number of turns per winning trial in each state i is just N − 1, and the expected number of turns in a win for red, or for any other color, is just the number of nonterminal states times the expected number of turns in each state, or (N − 1)². Lew Mammel, Jr.

Yes, it is (N − 1)². Let S_i be the event that the balls eventually end up all coloured i, and I_i the indicator of this event (1 if it occurs and 0 if not). Let T be the number of steps until all balls are the same colour. Let A_i be the initial number of balls of colour i (1 in the problem as given, but let it vary). Let V(k) = E(I₁T | A₁ = k) (as far as the random variable I₁T is concerned, it doesn't matter what A_i is for i ≠ 1). We have V(0) = V(N) = 0. Note that E(I₁ | A₁ = k) = k/N. By a "first-step analysis" you can show that V(k) = k V(1) − (N − 1) Σ_{j=1}^{k−1} j/(N − k + j) for 1 ≤ k ≤ N. For k = N this means V(1) = (N − 1)²/N. But what we want is E(T | A₁ = A₂ = ⋯ = A_N = 1) = Σ_{i=1}^{N} E(I_iT | A_i = 1) = N V(1) = (N − 1)². Also interesting: if N is even and you start out with 2 balls of each of N/2 colours, the expected number of steps is (N − 1)² − N/2. – Robert Israel 20.24.
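The (N − 1)² answer in 20.23 is easy to corroborate by simulation. A Python sketch of the process as the transition probabilities above describe it (each turn: draw an ordered pair of distinct balls, repaint the second in the colour of the first); trial counts and tolerances are arbitrary:

```python
import random

random.seed(2)

def turns_until_monochrome(n):
    balls = list(range(n))            # n balls, each initially its own colour
    t = 0
    while len(set(balls)) > 1:
        i, j = random.sample(range(n), 2)
        balls[j] = balls[i]           # repaint the second ball
        t += 1
    return t

for n in (3, 4):
    trials = 20_000
    mean = sum(turns_until_monochrome(n) for _ in range(trials)) / trials
    print(n, mean)                    # theory: (n - 1)^2, i.e. 4 and 9
```

For N = 2 the answer is deterministically 1, matching (N − 1)² as well.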
Solution: if E[T(x)] were finite then, according to Wald's identity, E[W(T(x))] would equal 0; but it does not, since W(T(x)) = x. Now let's say that you don't know the stochastic process followed by the stock price, but you are given the price at time 0 of a call with expiration at time T as a function of the strike X (that is, given any input strike from 0 to infinity, you can output the price of the call). How would you use this information to determine the risk-neutral distribution of the stock price at time T? Let f(S, T) be the terminal risk-neutral pdf for the stock; the second derivative of the call price with respect to the strike is the discounted value of the risk-neutral pdf: C″(K, T) = exp(−rT) f(K, T). This is the well-known Breeden-Litzenberger result. 20.25. Identify the loop of string with the real numbers from the unit interval in the obvious way. We may assume without loss of generality that one of the cuts is at 0. Fix n. Let x be the expected size of the smallest piece, and let f(t) be the probability that the smallest piece has size at least t. It is easy to see that x equals the integral of f(t) from 0 to 1. We claim f(t) = (1 − nt)^{n−1} for t < 1/n, and f(t) = 0 otherwise. Let t < 1/n. Then we assert that configurations of n points on a unit loop of string (with one point at 0) such that cutting at those points produces pieces of length at least t correspond to configurations of n points on a loop of length (1 − nt), with one point at 0 and otherwise unrestricted. This follows because the configurations map into each other by deleting (or adding) length t of string after each point. The claim follows because the density of the configurations on the shorter loop is (1 − nt)^{n−1}. We can now compute x by evaluating the integral of (1 − nt)^{n−1} with respect to t from 0 to 1/n. Let s = 1 − nt; then the integral becomes (1/n) · (the integral of s^{n−1} from 0 to 1 with respect to s), or 1/n².
So the answer to the first part is 1/n². The answer to the second part can be derived by applying the above idea recursively and inductively. Consider a set of n points on a unit loop (with one point at 0), and let x be the size of the smallest piece when the loop is cut at those points. Consider deleting a piece of length x after each point. This merges two of the points, leaving a set of (n − 1) points on a loop of length (1 − nx) (still with one point at 0). Since the expected size of x is 1/n², the expected size of the smaller loop is 1 − 1/n. Let y be the size of the smallest piece when the smaller loop is cut at the remaining points. Clearly the expected size of y is (1 − 1/n) · 1/(n − 1)² = 1/(n(n − 1)). Now the size of the second smallest piece in the original loop is x + y, which has expected size (1/n)(1/n + 1/(n − 1)). Continuing in this way, we find that the expected size of the kth smallest piece is (1/n)(1/n + 1/(n − 1) + ⋯ + 1/(n − k + 1)). (Note the sum of the expected sizes of all the pieces is 1, as expected.) So the expected size of the largest piece (i.e. the nth smallest piece) is (1/n)(1 + 1/2 + ⋯ + 1/n). It is well known that the sum 1 + 1/2 + ⋯ + 1/n behaves like ln(n) for large n, so asymptotically the largest piece has size ln(n)/n. 20.26. The volume of a hypersimplex is 1/n!. Proceed by induction on n. Take the n-dimensional simplex where all n variables are greater than 0 and their sum is less than 1. Let x be the variable of integration in a nested integral. At the floor, when x = 0, we find a simplex in n − 1 dimensions; its volume is 1/(n − 1)!. As we move along the x axis, the sum of the remaining variables is restricted to 1 − x, rather than 1. This is a scaled version of the (n − 1)-simplex. When all the variables are scaled by a factor of k, the volume is multiplied by k^{n−1}; thus the volume of the simplex on the floor is multiplied by (1 − x)^{n−1}. The integrand is therefore (1 − x)^{n−1}/(n − 1)!.
We can replace 1 − x by x; that just reflects the shape through a mirror. Thus the integral is xⁿ/n!; evaluating at 0 and 1 gives 1/n!. The generalized octahedron in n dimensions consists of n variables such that the sum of their absolute values never exceeds 1. This is actually a bunch of simplexes placed around the origin; in fact we need 2ⁿ simplexes to make an octahedron. In 3 dimensions we place 8 simplexes around the origin, one for each octant; each simplex presents one face of the octahedron. The volume of the generalized octahedron is 2ⁿ/n!, which approaches 0 as n approaches infinity. I just got a copy of Paul & Dominic's Guide to Getting a Quant Job, which I hope everyone reads. I say that not for your benefit in getting a job, but for my benefit in saving time and annoyance when looking to fill a job. I'm posting here because it recommends this forum for practicing brain-teaser interview questions. That's an excellent idea. But if that's what you're here to do, you should also know how your answer will be scored. So here's my take on it; I invite other opinions and comments. I ask three types of brain-teaser questions. (1) Really easy ones. This is the equivalent of moving the mouse around at random to see if the computer is frozen. An example might be: "Suppose I flip a fair coin 10 times and pay you $1 if the first flip is heads, $2 if the second flip is heads, up to $10 if the 10th flip is heads. What is your expected payout?" I assume that anyone applying for a quant job knows how to sum the numbers from 1 to 10 (or has memorized the answer) and can divide by 2. If not, I don't want them anyway. I score the answers as follows:
10 - Listens carefully as the problem is posed, thinks for a minute, then answers $27.50 without embellishment.
8 - Same as above, except asks for unnecessary clarification, appears suspicious of a trap, needs to use
pen and paper, or feels it necessary to explain the reasoning.
6 - Same as 8 or 10, but gets a wrong answer between $20 and $35, or a clearly unreasonable wrong answer but says it's unreasonable.
4 - Same as 8 or 10, but gets a clearly unreasonable wrong answer and doesn't mention it.
2 - Cannot understand the question, or pretends to misunderstand it to avoid giving a numerical answer.
0 - Throws out random phrases and numbers and watches my face hoping for some kind of hint, like Clever Hans, the horse that could do math.
The point here is that good quants are confident of their ability to recognize and solve simple problems. Most applicants are not good quants. You can do well on exams without this skill, because problems are often set at an expected moderate level of difficulty; if it's too easy or too hard you probably misunderstood it. Real work is not like that. If you're too scared to give a simple answer to a simple question, you're not going to work out. (2) Hard ones, like many of the ones you will find here. Here the scoring is:
10 - Listens carefully as the problem is posed, thinks for a minute, then says anything intelligent that indicates they understand why the problem is hard, and shows some confidence at being able to solve it. Right or wrong doesn't matter.
8 - Answers immediately and correctly (I assume in this case they've heard it before).
6 - Says "I've heard it before," and answers correctly.
4 - Says "I've heard it before," and answers incorrectly, or in a way that shows they don't understand the answer, or can't remember the answer.
2 - Cannot understand the question, or pretends to misunderstand it to avoid giving an answer.
0 - Throws out random phrases and numbers and watches my face hoping for some kind of hint, like Clever Hans, the horse that could do math.
There are people who expect to get the right answer, and are happy to be judged on their actual level of intelligence.
There are others who know in their hearts they will never get the right answer, and/or hope to mislead you into thinking they are smarter than they are. Only the first group will get considered for the job. I judge almost entirely on demeanor. Good people are eager to get these questions; bad people dread them. Good people are proud when they get it right, and honest but unembarrassed when they get it wrong. Bad people make excuses and complaints either way. (3) Open-ended ones, like "Give me your personal subjective 50% confidence interval for the Gross Domestic Product of Finland in USD."
10 - Without prompting comes up with a reasonable approach like: "I know it's a small country, but it's not so tiny I've never heard of it, so I'll guess 5 million population. It's not known as particularly poor or rich for Northern Europe, so I'll guess $10,000 per capita income. That gives me a point estimate of $50 billion. National income and GDP are close enough to each other for this approximation. For a 50% confidence level I'd go 2 to 20 million population and $5,000 to $20,000 per capita income; that gives me $10 billion to $400 billion. Considering everything, I think that's too big a range, so I'll go with $25 billion to $150 billion. If I had to bet, I'd be equally happy to bet the true number is inside as outside that range."
8 - Same as above but needs prompting, and either uses some clearly unreasonable numbers or makes numerous math errors.
6 - Thinks for a while and comes up with a reasonable range, but can't discuss it intelligently.
4 - Cannot understand what a 50% confidence interval is.
2 - Keeps asking for more information, or explains how they would look up the answer.
0 - Throws out random phrases and numbers and watches my face hoping for some kind of hint, like Clever Hans, the horse that could do math.
Here I'm looking for the ability to understand and follow instructions, some small amount of practical sense and knowledge of the world, and the ability to use quantitative reasoning. I retired this question when a candidate outsmarted me. I'd used it for about two years and gotten scores from 0 to 10. Then someone said, "My interval is all dollar amounts that round to odd integer amounts." He turned a (3) question into a (2) one, and got the job. 20.27. Solution: may be a bad answer. 20.28. Solution (maybe computational): Has anyone tried to calculate this maximum expectation? I see that seat number 3 and seat number 23 give you the highest expectation; however, the value I am getting is 11.38. Here is how I solve it. Let E(n) be the expected number of antisocial people you can seat in n seats. For example:
E(0) = 0
E(1) = 1
E(2) = 1
E(3) = 1/3 · (1 + E(1)) + 1/3 · (1 + E(0)) + 1/3 · (1 + E(1)) = 1 + 2/3 · E(1)
E(4) = 1/4 · (1 + E(2)) + 1/4 · (1 + E(1)) + 1/4 · (1 + E(1)) + 1/4 · (1 + E(2)) = 1 + 2/4 · (E(1) + E(2))
E(5) = 1 + 2/5 · (E(1) + E(2) + E(3))
and E(n) = 1 + 2/n · (E(1) + E(2) + ⋯ + E(n−2)) for n = 2, 3, …
The last expression can also be written as E(n) = E(n−2) + 2/n · (1 + E(n−3)) for n = 3, 4, …
Now let A(m) be the expected number of antisocial people you can seat in 25 seats provided that the first person takes the mth seat. Then:
A(1) = 1 + E(23)
A(2) = 1 + E(22)
A(3) = E(1) + 1 + E(21)
A(4) = E(2) + 1 + E(20)
and so on. I am getting A(3) = 11.38. It is interesting that A(m) oscillates. 20.29. solution: n! S_n = Σ (−1)^k · k!(n−k)!, (n+1)! S_{n+1} = Σ (−1)^k · k!(n+1−k)! = 0, or: (n+1)! = Σ (−1)^k · k!(n+1−k)!, (n+2) n! S_n − (n+1)! = … = (n + … One can also solve it via arithmetic (the problem about f(a, b)): replace the binomial coefficient by an integral. …the two sums differ by δ(n); if δ(n) ≥ 10, then move one of the digits from the group with the bigger sum to the other group. Hence δ(n) is bounded by 10. I claim that the average is 1/2.
It can be seen that this number is greater than or equal to 1/2, because if the sum of the digits of n is odd then δ(n) ≥ 1, and the probability of having an odd digital sum is 1/2. One can also see that the probability that a number has at least ten digits equal to 1 is 1 (really easy to show). Now the average of δ(n) over all numbers will be equal to the average over the special numbers I mentioned (the ones having at least ten digits equal to 1). For each of these numbers n, just remove the first ten digits equal to 1 to obtain m. Then we know that δ(m) < 10. Now, using the extra 1-digits we have at hand, we can distribute them among the two groups we have for δ(m) to decrease the difference between the sums to either 0 or 1: if the sum of the digits of m is odd we can reduce δ(m) to 1, and in the other case to 0. One can again easily observe that the probability that m has an odd digital sum is 1/2. So for half of the numbers (with respect to probability) we have δ(n) = 0, and for the other half (disjoint from the first half) we have δ(n) ≤ 1. So the average we are trying to find is less than or equal to 1/2, and hence my claim is proved. (wilmott) "Probabilities" for natural numbers: that 2^n | …
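The recursion for the antisocial seating problem (20.28 above) is straightforward to code directly. A Python sketch with exact fractions; E and A are exactly as defined in that solution, with the first person's seat blocking its two neighbours:

```python
from fractions import Fraction

SEATS = 25

# E[n]: expected number of antisocial people seated in a row of n free seats.
E = [Fraction(0)] * (SEATS + 1)
E[1] = Fraction(1)
for n in range(2, SEATS + 1):
    # E(n) = 1 + (2/n) * (E(1) + ... + E(n-2)); the sum is empty for n = 2.
    E[n] = 1 + Fraction(2, n) * sum(E[1:n - 1], Fraction(0))

def A(m):
    # First person sits in seat m (1-based), leaving free runs of
    # length m-2 on the left and SEATS-m-1 on the right.
    left, right = max(m - 2, 0), max(SEATS - m - 1, 0)
    return E[left] + 1 + E[right]

values = [A(m) for m in range(1, SEATS + 1)]
best = max(values)
print(float(best), [m for m in range(1, SEATS + 1) if A(m) == best])
```

Running it shows the oscillation of A(m) and lets one read off the maximizing seats directly.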