    begins with 603245?
    Prime factor problem
    20.31.
Search: the 𝑆_𝑛/𝑛 problem, current state > 𝐸(best) >
Wald had a number of important results, including the famous theorem that the expected value of the sum of a random number of random variates equals the expected value of a single variate times the expected number of terms in the sum, as long as the stopping rule is independent of the value of the sum.
Like a lot of theorems, Wald's Theorem is valuable when it doesn't apply. That is, people often assume it is true because it's a handy trick for solving certain kinds of problems. Wald gave rigorous conditions under which it is true. When you come across an application, it's a good idea to check the three conditions.
Here is a simple example. In the mindless children's card game "War," two players split a deck of cards between them. At each turn, both players reveal their top card, and the player with the higher card takes both and puts them at the bottom of her deck. If the cards are the same there is a "war," meaning each player deals three cards face down, then turns up the next card. The player with the higher card takes all ten cards. In case of a tie there is another war, with the winner getting 18 cards. And so on until one player runs out of cards.
    Question: what is the expected number of concealed Aces (the high card) that will change hands in the first play of the game? This is an important parameter for analyzing the game.
You could figure out all the possible combinations, but that would take a while. Wald's theorem tells us we can compute the expected number of wars on the first play, 16 · (17^(−2) + 17^(−3) + 17^(−4)), and multiply by the expected number of concealed Aces from the losing player per war, 3/13. The answer is 0.0144. By the way, you cannot have more than three wars. In the unlikely event (1/83,521) that the players tie four times in a row, both of them lose the game and no cards are exchanged.
    The three conditions of Wald’s theorem are:
    (1) The number of trials is a non-negative integer with finite expectation.
(2) The outcomes of the trials are i.i.d. with finite expectation.
(3) The outcome of a trial is independent of whether or not it is included in the total.
In the War example, condition 2 is violated. The probability that the first two cards match is 3/51 = 1/17. If they do, the probability that the next two compared cards match is 2/50 · 1/49 + 48/50 · 3/49 = 146/2,450 ≈ 0.0596 instead of 0.0588.
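A Monte Carlo sketch (Python) to sanity-check the figure above; the helper name and the modeling choice that "concealed Aces changing hands" means Aces among the losing player's three face-down war cards are my own assumptions, not part of the original post:

import random

ACE = 12  # ranks 0..12, with 12 the highest (the Ace)

def concealed_aces_first_play(rng):
    # One simulated first play of War; returns the number of concealed Aces
    # (the losing player's face-down war cards) that change hands.
    deck = [r for r in range(13) for _ in range(4)]
    rng.shuffle(deck)
    a, b = deck[:26], deck[26:]      # each player's half, index 0 = top of the stack
    ia = ib = 0                      # position of the next card to turn over
    down_a, down_b = [], []          # face-down cards laid during wars
    ties = 0
    while True:
        ca, cb = a[ia], b[ib]
        ia += 1
        ib += 1
        if ca > cb:                  # A wins the play: B's concealed cards change hands
            return down_b.count(ACE)
        if cb > ca:
            return down_a.count(ACE)
        ties += 1
        if ties == 4:                # four ties in a row: both lose, nothing changes hands
            return 0
        down_a += a[ia:ia + 3]       # war: three cards face down from each player,
        down_b += b[ib:ib + 3]       # then the next card is turned up
        ia += 3
        ib += 3

rng = random.Random(0)
trials = 200_000
print(sum(concealed_aces_first_play(rng) for _ in range(trials)) / trials)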
    —-
    Version #1 (tboafo): A die is rolled once. If the outcome is 1, 2, or 3, one stops; otherwise (i.e., if it is 4,
    5, or 6) one rolls the die a second time. What is the total expected value? [A: 5.25]
    Version #2 (Wilbur): A die keeps being rolled until the outcome is 1, 2, or 3. What is the total expected value? [A: 7]
    Now, on to version #3:

One has an arbitrarily large number of dice at one's disposal (this is just a conceptual convenience; the problem can easily be formulated with one single die). The first die is rolled. If the outcome is 1, 2, or 3, one stops; otherwise, if it is 4, 5, or 6, a corresponding number of dice are rolled. This procedure continues for every rolled die whose outcome is 4, 5, or 6. Let n count the rounds of rolls. What is the total expected value at the end of the n-th round of rolls?
    Let’s clarify with a couple of examples:
    Example I:
    1st round of rolls (only one die is rolled): 3 => game ends.
    Example II:
1st round of rolls (only one die is rolled): 4 => 4 dice will be rolled next
2nd round of rolls: 3, 5, 2, 6 => the 1st and 3rd dice end their lives; the 2nd and 4th dice lead to 5 and 6 dice being rolled next, respectively
3rd round of rolls: 1, 2, 4, 4, 3; 5, 3, 6, 1, 6, 2 => ... and so forth ...
solution:
expected value for the 1st round: 3.5
expected number of dice for the 2nd round: (4+5+6)/6 = 2.5
expected value for the 2nd round: 2.5 · 3.5
expected number of dice for the n-th round: 2.5^(n−1)
expected value for the n-th round: 2.5^(n−1) · 3.5
total expected value after the n-th round = 3.5 · (2.5^n − 1)/1.5 = 7/3 · ((5/2)^n − 1)
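A small simulation (Python sketch; the function and variable names are just illustrative) that checks the closed form 7/3 · ((5/2)^n − 1) for small n:

import random

def total_after_rounds(n_rounds, rng):
    # Sum of all pips shown during the first n_rounds of the branching process.
    dice = 1                      # number of dice rolled in the current round
    total = 0
    for _ in range(n_rounds):
        next_dice = 0
        for _ in range(dice):
            v = rng.randint(1, 6)
            total += v
            if v >= 4:            # a 4, 5 or 6 spawns that many dice for the next round
                next_dice += v
        dice = next_dice
        if dice == 0:
            break
    return total

rng = random.Random(1)
n, trials = 3, 200_000
estimate = sum(total_after_rounds(n, rng) for _ in range(trials)) / trials
print(estimate, 7 / 3 * ((5 / 2) ** n - 1))   # both around 34.1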

    20.32.
The key is that each pair of books is independent. Using the 0.4, 0.3, 0.2 and 0.1 probabilities: book 1 spends 3/7 of the time below book 2, 2/6 of the time below book 3 and 1/5 of the time below book 4. So it has, on average, 202/210 books on top of it.
If the probabilities are p1, p2, ..., pn, book i's average depth will be the sum over j = 1 to n (j ≠ i) of pj/(pi + pj).
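The formula is easy to check numerically; here is a minimal Python sketch for the 0.4/0.3/0.2/0.1 example (exact arithmetic via fractions; the function name is mine):

from fractions import Fraction

def expected_depth(p, i):
    # Expected number of books on top of book i: sum over j != i of p_j / (p_i + p_j).
    return sum(pj / (p[i] + pj) for j, pj in enumerate(p) if j != i)

p = [Fraction(4, 10), Fraction(3, 10), Fraction(2, 10), Fraction(1, 10)]
print(expected_depth(p, 0))   # 101/105, i.e. 202/210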
    20.33.
Strategy: easy; expectation: computationally expensive (?)
    20.34.
    20.35.
The solution is highly dependent on how one interprets the phrase "chord chosen at random".
Solution 1: Assign a uniform probability distribution to the angular positions of the chord's two endpoints on the circumference; then, clearly, P = 1/3.
Solution 2: Assign a uniform probability distribution to the center of the chord over the area of the circle; then one can show that P = 1/4.
Solution 3: Assign a uniform probability distribution to the distance between the center of the chord and the center of the circle; then P = 1/2.
Remarkably, in a 1973 paper, "The Well-Posed Problem", E. T. Jaynes argued that this problem is well-posed after all if one requires certain physical symmetries of the probability distribution: rotational invariance, translational invariance and scale invariance. As Jaynes demonstrates, the 3rd solution is actually superior on these physical grounds.
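A Monte Carlo sketch of the three interpretations (Python; it assumes the event in question is the classic Bertrand one, that the chord is longer than the side of the inscribed equilateral triangle, which is what yields 1/3, 1/4 and 1/2):

import math, random

rng = random.Random(0)
SIDE = math.sqrt(3)               # side of the equilateral triangle inscribed in the unit circle
N = 200_000

def chord_endpoints():            # solution 1: two uniform points on the circumference
    a, b = rng.uniform(0, 2 * math.pi), rng.uniform(0, 2 * math.pi)
    return 2 * math.sin(abs(a - b) / 2)

def chord_midpoint():             # solution 2: midpoint uniform over the disc
    while True:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        r2 = x * x + y * y
        if r2 <= 1:
            return 2 * math.sqrt(1 - r2)

def chord_radius():               # solution 3: distance from the centre uniform on [0, 1]
    d = rng.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d)

for chord in (chord_endpoints, chord_midpoint, chord_radius):
    print(chord.__name__, sum(chord() > SIDE for _ in range(N)) / N)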
    20.36.
    I agree w/silverside on (a) 2/pi

For (b), given a random direction θ in [0, π] that we walked to get to P, the probability of reaching the river by walking in a random direction is |π − 2θ| / (2π). Averaging over θ from 0 to π gives 1/4.
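A quick numerical check of the averaging step (Python sketch, simple midpoint rule):

import math

n = 1_000_000
h = math.pi / n
# average of |pi - 2*theta| / (2*pi) over theta uniform on [0, pi]
avg = sum(abs(math.pi - 2 * (i + 0.5) * h) / (2 * math.pi) for i in range(n)) * h / math.pi
print(avg)    # ~0.25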
    20.37.
    20.38.
    20.39.
    20.40.
I ask that people who post problems make some effort to be precise. I assume a "random" binary sequence means each digit is 0 or 1 with probability 0.5, independent of all other digits. I assume subsequence means contiguous subsequence, as specified by the subsequent post.
A sequence of length N has N − k + 1 subsequences of length k. The probability of two sequences of length k being identical, given the assumptions above, is 2^(−k). So the expected number of identical subsequences of length k is (N − k + 1)² · 2^(−k). For large N, this is approximately N² · 2^(−k). Using that approximation, the expected number of identical subsequences of length k or longer is N² · 2^(1−k). This equals 1 if 2·ln(N) + (1 − k)·ln(2) = 0, or k = 2·ln(N)/ln(2) + 1.
For large N and k near 2·ln(N)/ln(2) + 1, the probability of having an identical subsequence of length k+1, given that you have an identical subsequence of length k, is near 0.75, and 0.75/(1 − 0.75) = 3. This gives an approximation for the expected value of the longest subsequence of 2·ln(N)/ln(2) + 4, for large N.
    For N = 3 there are 64 possible combinations. 2 of them, 000/111 and 111/000 have longest identical subsequence equal to zero. 20 of them have longest identical subsequence equal to one. 34 of them have longest identical subsequence equal to two. 8 of them, anything/the same thing, have longest identical subsequence equal to three. The expected value is (2*0 + 20*1 + 34*2 + 8*3)/64 = 112/64 as Merlin81
    said. That’s not close to the 7.17 asymptotic approximation for large N.
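The N = 3 enumeration is easy to reproduce by brute force (Python sketch; "subsequence" is taken to mean contiguous substring, as above):

from itertools import product

def longest_common_substring(s, t):
    # Longest k such that some length-k substring of s also occurs in t.
    best = 0
    for k in range(1, len(s) + 1):
        subs = {s[i:i + k] for i in range(len(s) - k + 1)}
        if any(t[i:i + k] in subs for i in range(len(t) - k + 1)):
            best = k
    return best

N = 3
counts = {}
for s in product("01", repeat=N):
    for t in product("01", repeat=N):
        k = longest_common_substring("".join(s), "".join(t))
        counts[k] = counts.get(k, 0) + 1

print(counts)                                          # expect {0: 2, 1: 20, 2: 34, 3: 8}
print(sum(k * c for k, c in counts.items()) / 4 ** N)  # 112/64 = 1.75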
variation:
the first sequence is 00000...000
Apparently this is equivalent to shifting the origin and applying the result for regression without an intercept. Unbiasedness should be preserved.
    20.42.
a) I believe the answer to the first question is yes. Let 𝑝_𝑖 be the 𝑖-th prime. For convenience, set 𝑝_0 = 1. Let 𝑞_𝑖 be the product of the first 𝑖 primes. (In particular, 𝑞_0 = 1.) Assign the number 𝑞_𝑖 a probability of 1/𝑝_𝑖 − 1/𝑝_(𝑖+1). Every other number is assigned a probability of zero. I believe this probability distribution satisfies your property.
    b) I believe the answer to the second question is no. We can’t even make the condition work for all squarefree numbers. Here is a sketch of the argument.
    Suppose the condition did hold for all squarefree numbers. We can then easily show that the events of being divisible by 2, 3, 5, etc., are mutually independent. In particular, the probability of being nondivisible by
    17, 19, 23, 29, 31, etc., is 0. (Here I am using the fact that the sum of inverse primes is infinite.) It follows that the probability of being 1 through 16 is 0. By changing the 17 to an arbitrarily large prime, we get that the probability of each integer is 0, which is a contradiction.

    Here’s a remaining question. Can we make the condition hold for all primes and products of two distinct primes?
source: aops, t=187407
20.43.
Let 𝜃 be the angle of the point's initial vector. After traveling a distance 𝑟, the point has moved 𝑟·cos(𝜃) horizontally and 𝑟·sin(𝜃) vertically, and thus has struck 𝑟·(sin(𝜃) + cos(𝜃)) + 𝑂(1) walls. Hence the average distance between walls will be 1/(sin(𝜃) + cos(𝜃)). We now average this over all angles 𝜃:
2/𝜋 · ∫_(𝜃=0)^(𝜋/2) 1/(sin(𝜃) + cos(𝜃)) d𝜃 = 2√2 · ln(1 + √2)/𝜋 ≈ 0.79
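A quick numerical check of the integral against the closed form (Python, midpoint rule):

import math

n = 1_000_000
h = (math.pi / 2) / n
integral = sum(1 / (math.sin((i + 0.5) * h) + math.cos((i + 0.5) * h)) for i in range(n)) * h
print(2 / math.pi * integral)                                     # ~0.7935
print(2 * math.sqrt(2) * math.log(1 + math.sqrt(2)) / math.pi)    # same value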

    source: probability puzzles, archive
    20.44.
    ==> probability/flips/once.in.run.s <==
    References:
    John P. Robinson, Transition Count and Syndrome are Uncorrelated, IEEE Transactions on Information
    Theory, Jan 1988.
First we define a function or enumerator P(n,k) as the number of length-n sequences that generate k successes. For example,
P(4,1) = 4 (HHTH, HTHH, TTHT, and THTT are the 4 such length-4 sequences).
I derived two generating functions g(x) and h(x) in order to enumerate P(n,k); they are compactly represented by the following matrix polynomial:
[ g(x) ]   [ 1  1 ]^(n−3)   [   4    ]
[ h(x) ] = [ 1  x ]        ·[ 2 + 2x ]
The above is expressed as a matrix generating function. It can be shown that 𝑃(𝑛, 𝑘) is the coefficient of 𝑥^𝑘 in the polynomial 𝑔(𝑥) + ℎ(𝑥).
For example, if 𝑛 = 4 we get 𝑔(𝑥) + ℎ(𝑥) = 10 + 4𝑥 + 2𝑥² from the matrix generating function. Clearly, P(4,1) (the coefficient of x) is 4, and 𝑃(4, 2) = 2 (there are two such sequences, THTH and HTHT).
We can show that mean(k) = (n − 2)/4 and sd = √(5n − 12)/4.
We need to generate n samples. This can be done by using sequences of length n + 2. Then our new statistics would be mean = n/4 and sd = √(5n − 2)/4.
A similar approach can be followed for higher-dimensional cases.
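The matrix recurrence is easy to unroll for small n; here is a Python sketch with polynomials stored as coefficient lists (all names are mine):

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) for i in range(n)]

def g_plus_h(n):
    # Apply [[1, 1], [1, x]]^(n-3) to the column vector (4, 2 + 2x); return g(x) + h(x).
    g, h = [4], [2, 2]                       # the constant 4 and the polynomial 2 + 2x
    for _ in range(n - 3):
        g, h = poly_add(g, h), poly_add(g, poly_mul([0, 1], h))
    return poly_add(g, h)

print(g_plus_h(4))   # [10, 4, 2] -> 10 + 4x + 2x^2, so P(4,1) = 4 and P(4,2) = 2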
    20.45.
    ==> probability/lights.s <==
Let E(m,n) be this number, and let (x)C(y) = x!/(y! (x−y)!). A model for this problem is an n×m grid in which each lattice point represents a traffic light. We can consider each traffic light to be a direction pointer, with an equal chance of pointing either east or south.
IMHO, the best way to approach this problem is to ask: what is the probability that edge-light (x,y) will be the first red edge-light that the pedestrian encounters? This is easy to answer; since the only way to reach (x,y) is by going south x times and east y times, in any order, we see that there are (x+y)C(x) possible paths from (0,0) to (x,y). Since each of these has probability (1/2)^(x+y+1) of occurring, we see that the probability we are looking for is (1/2)^(x+y+1) · (x+y)C(x). Multiplying this by the expected number of red lights that will be encountered from that point, (n−k+1)/2, we see that
𝐸(𝑚, 𝑛) = Σ_(𝑘=0)^(𝑚−1) (1/2)^(𝑛+𝑘+1) · (𝑛+𝑘)𝐶(𝑛) · (𝑚−𝑘+1)/2 + Σ_(𝑘=0)^(𝑛−1) (1/2)^(𝑚+𝑘+1) · (𝑚+𝑘)𝐶(𝑚) · (𝑛−𝑘+1)/2
Are we done? No! Putting on our Captain Clever Cap, we define
𝑓(𝑚, 𝑛) = Σ_(𝑘=0)^(𝑛−1) (1/2)^𝑘 · (𝑚+𝑘)𝐶(𝑚) · 𝑘
and
𝑔(𝑚, 𝑛) = Σ_(𝑘=0)^(𝑛−1) (1/2)^𝑘 · (𝑚+𝑘)𝐶(𝑚).
Now, we know that
𝑓(𝑚, 𝑛)/2 = Σ_(𝑘=1)^(𝑛) (1/2)^𝑘 · (𝑚+𝑘−1)𝐶(𝑚) · (𝑘−1)
and since f(m,n)/2 = f(m,n) − f(m,n)/2, we get that
𝑓(𝑚, 𝑛)/2 = Σ_(𝑘=0)^(𝑛−1) (1/2)^𝑘 · ((𝑚+𝑘)𝐶(𝑚)·𝑘 − (𝑚+𝑘−1)𝐶(𝑚)·(𝑘−1)) − (1/2)^𝑛 · (𝑚+𝑛−1)𝐶(𝑚)·(𝑛−1)
= Σ_(𝑘=0)^(𝑛−2) (1/2)^(𝑘+1) · (𝑚+𝑘)𝐶(𝑚)·(𝑚+1) − (1/2)^𝑛 · (𝑚+𝑛−1)𝐶(𝑚)·(𝑛−1)
= (𝑚+1)/2 · (𝑔(𝑚, 𝑛) − (1/2)^(𝑛−1)·(𝑚+𝑛−1)𝐶(𝑚)) − (1/2)^𝑛·(𝑚+𝑛−1)𝐶(𝑚)·(𝑛−1)
therefore
𝑓(𝑚, 𝑛) = (𝑚+1)·𝑔(𝑚, 𝑛) − (𝑛+𝑚)·(1/2)^(𝑛−1)·(𝑚+𝑛−1)𝐶(𝑚)
Now,
𝐸(𝑚, 𝑛) = (𝑛+1)·(1/2)^(𝑚+2)·𝑔(𝑚, 𝑛) − (1/2)^(𝑚+2)·𝑓(𝑚, 𝑛) + (𝑚+1)·(1/2)^(𝑛+2)·𝑔(𝑛, 𝑚) − (1/2)^(𝑛+2)·𝑓(𝑛, 𝑚)
= (𝑚+𝑛)·(1/2)^(𝑛+𝑚+1)·(𝑚+𝑛)𝐶(𝑚) + (𝑚−𝑛)·(1/2)^(𝑛+2)·𝑔(𝑛, 𝑚) + (𝑛−𝑚)·(1/2)^(𝑚+2)·𝑔(𝑚, 𝑛)
Setting m = n in this formula, we see that
𝐸(𝑛, 𝑛) = 𝑛 · (1/2)^(2𝑛) · (2𝑛)𝐶(𝑛),
and applying Stirling's formula we get the beautiful asymptotic formula
𝐸(𝑛, 𝑛) ~ √(𝑛/𝜋)
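The algebra above can be checked mechanically; here is a Python sketch comparing the original double sum with the simplified closed form in exact rational arithmetic (the function names are mine):

from fractions import Fraction
from math import comb

half = Fraction(1, 2)

def g(m, n):
    return sum(half ** k * comb(m + k, m) for k in range(n))

def E_sum(m, n):       # the double sum derived first
    return (sum(half ** (n + k + 1) * comb(n + k, n) * Fraction(m - k + 1, 2) for k in range(m))
          + sum(half ** (m + k + 1) * comb(m + k, m) * Fraction(n - k + 1, 2) for k in range(n)))

def E_closed(m, n):    # the simplified closed form
    return ((m + n) * half ** (n + m + 1) * comb(m + n, m)
            + (m - n) * half ** (n + 2) * g(n, m)
            + (n - m) * half ** (m + 2) * g(m, n))

for m in range(1, 7):
    for n in range(1, 7):
        assert E_sum(m, n) == E_closed(m, n)

print(E_closed(4, 4), 4 * half ** 8 * comb(8, 4))   # E(n,n) = n (1/2)^(2n) (2n)C(n)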
    20.46.
==> probability/random.walk.s <==
I can show the probability that Waldo returns to 0 is 1. Waldo's wanderings map to an integer grid in the plane as follows. Let (𝑋_𝑡, 𝑌_𝑡) be the cumulative sums of the length-1 and length-2 steps, respectively, taken by Waldo through time t. By looking only at even t, we get the ordinary random walk in the plane, which returns to the origin (0,0) with probability 1. In fact, landing at (2n, n) for any n will land Waldo on top of his keys too. There's no need to look at odd t.
    Similar considerations apply for step sizes of arbitrary (fixed) size.
    20.47.
==> probability/transitivity.s <==
Yes. The actual values on the dice faces don't matter, only their ordering. WLOG we may assume that no two faces of the same or different dice are equal. We can assume "generalised" dice where the faces need not be equally probable. These can be approximated by dice with equiprobable faces by having enough faces and marking some of them the same.
    Take the case of three dice, called A, B, and C. Picture the different values on the faces of the A die.
    Suppose there are three:
    A A A
    The values on the B die must lie in between those of the A die:
    B A B A B A B
    With three different A values, we need only four different B values.
    Similarly, the C values must lie in between these:
    C B C A C B C A C B C A C B C
    Assume we want A to beat B, B to beat C, and C to beat A. Then the above scheme for the ordering of values can be simplified to:
    B C A B C A B C A B C
    since for example, the first C in the previous arrangement can be moved to the second with the effect that the probability that B beats C is increased, and the probabilities that C beats A or A beats B are unchanged. Similarly for the other omitted faces.
    In general we obtain for n dice A...Z the arrangement
    B ... Z A B ... Z ...... A B ... Z
    where there are k complete cycles of B..ZA followed by B...Z. k must be at least 1.
    CONJECTURE: The optimum can be obtained for k=1.
    So the arrangement of face values is B ... Z A B ... Z. For three dice it is BCABC. Thus one die has just one face, all the other dice have two (with in general different probabilities).
    CONJECTURE: At the optimum, the probabilities that each die beats the next can be equal.
Now put probabilities into the BCABC arrangement:
B  C  A  B  C
x  y  1  x' y'
Clearly x + x' = y + y' = 1.
Prob. that A beats B = x'; B beats C = x + x'y'; C beats A = y.
Therefore x' = y = x + x'y'.
Solving for these gives x = y' = 1 − y, x' = y = (−1 + sqrt(5))/2 = prob. of each die beating the next = 0.618...
For four dice one obtains the probabilities:
B  C  D  A  B  C  D
x  y  z  1  x' y' z'
A beats B: x'; B beats C: x + x'y'; C beats D: y + y'z'; D beats A: z.
CONJECTURE: for any number of dice, at the optimum, the sequence of probabilities a b c ... z 1 a' b' c' ... z' is palindromic.
We thus have the equalities:
x + x' = 1, y + y' = 1, z + z' = 1, x' = z = x + x'y' = y + y'z', y = y' (hence both = 1/2)
Solving this gives x = 1/3, z = 2/3 = prob. of each die beating the next. Since all the numbers are rational, the limit is attainable with finitely many equiprobable faces. E.g. A has one face, marked 0. C has two faces, marked 2 and −2. B has three faces, marked 3, −1, −1. D has three faces, marked 1, 1, −3. Or all four dice can be given six faces, marked with numbers in the range 0 to 6.
    Finding the solution for 5, 6, or n dice is left as an exercise.
    Source: Richard Kennaway, jrksys.uea.ac.uk
Martin Gardner (of course!) wrote about nontransitive dice; see the Oct '74 issue of Scientific American, or his book "Wheels, Life and Other Mathematical Amusements", ISBN 0-7167-1588-0 or ISBN 0-7167-1589-9 (paperback).
In the book, Gardner cites Bradley Efron of Stanford U. as stating that the maximum number for three dice is approx .618, requiring dice with more than six sides. He also mentions that .75 is the limit approached as the number of dice increases. The book shows three sets of 6-sided dice, where each set has 2/3 as the advantage.
a) No. Let the true weights of the ingots be x and y, and call the estimate x̂:

x̂ = aX + bY + cZ
Unbiasedness: E(x̂) = aE(X) + bE(Y) + cE(Z) = ax + by + c(x + y) = x, so a + c = 1, b + c = 0.
x̂ = (1 − c)X + (−c)Y + cZ
Efficiency: Var(x̂) = ((1 − c)² + c² + c²)·σ² = (3c² − 2c + 1)·σ²
To minimize the variance one should choose c = 1/3, i.e. x̂ = (2/3)X − (1/3)Y + (1/3)Z.
b) Var(x̂) = (2/3)σ², while Var((X₁ + X₂)/2) = (1/2)σ².
Averaging the two weighings of the first ingot is better.
    22.11.
    22.12.
    22.13.
    22.14.
    22.15.
The median is 𝑚 = 𝑘/√2, and 𝑚_ML = max{𝑋_1, ..., 𝑋_𝑛}/√2.
