Read the Wikipedia article about randomness. Then check where it says "fallacies". Dice have no memory. The next throw of the dice doesn't depend in any way on the previous results.
 
In the strict sense of the word, TRUE randomness means "continuing to positively maintain the ongoing assertion of equivalent probability". Of course, as was mentioned, this very strict definition of randomness is inherently deterministic. Furthermore, there is no real world example that I can think to cite because the real world is not like true randomness even though there are a few cases which closely approximate it. Most people refer to randomness loosely in an inexact fashion which doesn't exactly help the matter of what they are trying to debate. A pseudorandom number generator never produces an unpredictable result, by the way. A pseudorandom number generator only produces results that when viewed aggregately can be said to be random. For instance, over binary range, 0101 and 1010 are both truly random sequences, whereas 0011 and 1100 are only aggregately random. The truly random samples "continue to positively maintain the ongoing assertion of equivalent probability" whereas the aggregately random samples only succeed in achieving quantitative parity with "the ongoing assertion of equivalent probability" after n selections.
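Whatever one makes of the "aggregate randomness" terminology, the determinism of a pseudorandom generator itself is easy to demonstrate: its entire output is a function of the seed. A minimal Python sketch (the `sample` helper is just for illustration):

```python
import random

# A PRNG's output is completely determined by its seed: anyone who
# knows the seed (or internal state) can reproduce every "random" value.
def sample(seed, n=5):
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(n)]

a = sample(42)
b = sample(42)
assert a == b  # same seed, same sequence: nothing unpredictable here
```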

Most of that doesn't even make sense.

I see no way that 0101 and 1010 can be said to be random, if they are part of a simple alternating sequence. They would both then be completely predictable (i.e. non-random).

You also seem to be confused or mistaken about the difference between conditional probability and independent probability. A fairly shuffled deck of cards naturally exhibits the former, while a fair die roll or coin flip naturally exhibits the latter.

A deck of cards "has memory" or exhibits conditional probability because cards are shown and then removed from the deck. A shown card can never appear again. The unshown remainder of the deck then has 0 probability of producing the card that was just shown, and 1/(N-1) probability of the next card being a particular one of the unshown cards, where N is the number of cards before one was shown.

For example, suppose a 52-card deck is shuffled and one card is shown. It's the 2 of hearts. The probability of 2 of hearts appearing as the next card is 0. The probability of any specific other card appearing next is 1/51. Eventually there are two cards remaining, and you know exactly which two because you already know every other card that was shown. What you don't know is the order, or which card of the two will appear next. Suppose the last two cards are 3 of clubs and 5 of spades. At this point the probability of 3 of clubs being shown next is 1/2. The same probability applies to 5 of spades. The probability of any other card appearing is 0. Once the next card is shown, you can then predict with 100% certainty what the last card is. If that's your definition of deterministic, you need to take a basic statistics class.
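The arithmetic above can be checked mechanically. A small Python sketch (the card names are placeholders of my own) computing the conditional probability of the next card given the cards already shown:

```python
from fractions import Fraction

def prob_next(card, shown, deck_size=52):
    # P(next card == card | history): 0 if the card was already dealt,
    # otherwise 1 / (number of cards still in the deck).
    if card in shown:
        return Fraction(0)
    return Fraction(1, deck_size - len(shown))

# After the 2 of hearts is shown:
assert prob_next("2H", ["2H"]) == 0
assert prob_next("3C", ["2H"]) == Fraction(1, 51)

# With 50 cards shown and two left, each remaining card has probability 1/2:
history = ["card%d" % i for i in range(50)]
assert prob_next("3C", history) == Fraction(1, 2)
```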

A fair die or coin flip has no memory: each roll or flip is independent of what came before.

A random stream of bits is simply a stream where you can't predict with better than 50% probability what the next bit will be, even if you know the entire history of prior bits. Knowing the entire history tells you nothing about the bits not yet seen. This is independent probability. Compare that to the conditional probability exhibited by the shuffled deck of cards, where each card shown (i.e. the history of prior cards) DOES affect the probability of the next card to appear.

If the cards were infinite, or more precisely an infinite number of fairly shuffled fair decks, then every card is equiprobable at every point, even if you know the entire history of dealt cards. That's because there is an unending supply of every possible card, so knowing the history doesn't tell you anything about the cards not yet seen. You can't even do card-counting at levels better than chance.
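As a rough sanity check on the "history tells you nothing" claim, here is a small Python experiment: a predictor that always guesses the majority bit seen so far scores about 50% on an independent fair bit stream (the fixed seed is only there to make the run repeatable):

```python
import random

rng = random.Random(0)
bits = [rng.randint(0, 1) for _ in range(100_000)]

hits = ones = 0
for i, b in enumerate(bits):
    # Use the entire history: predict the majority bit seen so far.
    guess = 1 if 2 * ones > i else 0
    hits += (guess == b)
    ones += b

accuracy = hits / len(bits)
# accuracy hovers around 0.5: history gives no edge on independent bits
```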
 
gnasher729 said:
Read the Wikipedia article about randomness. Then check where it says "fallacies". Dice have no memory. The next throw of the dice doesn't depend in any way on the previous results.
Where did I claim that dice have memory? You might want to check the article on "straw man" before suggesting that I look at another article, given your appeal to Wikipedia's authority over the definition of randomness.

chown33 said:
Most of that doesn't even make sense.

I see no way that 0101 and 1010 can be said to be random, if they are part of a simple alternating sequence. They would both then be completely predictable (i.e. non-random).
You're missing the point. We shouldn't be asking if a sequence is random or not random. We should be asking how random a given sequence is and what purpose the extent of its randomness serves.

chown33 said:
You also seem to be confused or mistaken about the difference between conditional probability and independent probability. A fairly shuffled deck of cards naturally exhibits the former, while a fair die roll or coin flip naturally exhibits the latter.
No, I'm not confused. I'm also not confused about the difference between FSMs and PDAs. I can prove my claims even though some go against the grain. Can you say the same?

chown33 said:
A deck of cards "has memory" or exhibits conditional probability because cards are shown and then removed from the deck. A shown card can never appear again. The unshown remainder of the deck then has 0 probability of producing the card that was just shown, and 1/(N-1) probability of the next card being a particular one of the unshown cards, where N is the number of cards before one was shown.
Yes, thank you for reiterating something I already know.

chown33 said:
For example, suppose a 52-card deck is shuffled and one card is shown. It's the 2 of hearts. The probability of 2 of hearts appearing as the next card is 0. The probability of any specific other card appearing next is 1/51. Eventually there are two cards remaining, and you know exactly which two because you already know every other card that was shown. What you don't know is the order, or which card of the two will appear next. Suppose the last two cards are 3 of clubs and 5 of spades. At this point the probability of 3 of clubs being shown next is 1/2. The same probability applies to 5 of spades. The probability of any other card appearing is 0. Once the next card is shown, you can then predict with 100% certainty what the last card is. If that's your definition of deterministic, you need to take a basic statistics class.
As I said, I understand the difference between independent and dependent experiments and don't need additional classes to understand it better.

chown33 said:
A fair die or coin flip has no memory: each roll or flip is independent of what came before.
Yes, but then we are arguing two separate things and you are treating them as if they are one and the same. I know there's a fallacy for that...

chown33 said:
A random stream of bits is simply a stream where you can't predict with better than 50% probability what the next bit will be, even if you know the entire history of prior bits. Knowing the entire history tells you nothing about the bits not yet seen. This is independent probability. Compare that to the conditional probability exhibited by the shuffled deck of cards, where each card shown (i.e. the history of prior cards) DOES affect the probability of the next card to appear.
Not really. This is just another disconnect between theory and practice. Perhaps it is not random at all. Perhaps it looks random because the observer does not have a method to decipher its true meaning or hasn't figured out how to predict it. We need a meaningful definition for randomness or there is no point to it and I am in the right to insist that claims are verifiable even when they are only considered theoretical claims. So, what we should be doing is considering the qualities of randomness present in a given sequence rather than making all-or-nothing proclamations about randomness.

chown33 said:
If the cards were infinite, or more precisely an infinite number of fairly shuffled fair decks, then every card is equiprobable at every point, even if you know the entire history of dealt cards. That's because there is an unending supply of every possible card, so knowing the history doesn't tell you anything about the cards not yet seen. You can't even do card-counting at levels better than chance.
Then again, you are applying conditions to that which you then claim I shouldn't be able to verify because it is inconvenient to your definition of randomness. The claim has to be verifiable for it to be meaningful. Circular definitions aren't verifiable. Neither is telling me an alleged infinite deck you can't possibly verify is in a particular state a valid or meaningful claim. Assumptions have to be empirically verifiable or all of logic is just X claiming whatever X wants to claim and demanding that we conform to X's expectations by allowing claims to stand on their own unverifiable weight.
 
Where did I claim that dice have memory? You might want to check the article on "straw man" before suggesting that I look at another article, given your appeal to Wikipedia's authority over the definition of randomness.

You said, not in these words, that a truly random sequence has a memory.

To be honest, I can't be bothered with you anymore. Your opinion of what a random sequence is is just so far off the chart that it is not worth spending any more of my time.

Goodbye.
 
Mactrillionaire, please provide peer-reviewed citations for your definitions of true randomness.

For the OP's Monte Carlo simulation, the kind of theoretical "true random" sequences you seem to be advocating would not work very well, as the MC is trying to approximate the behavior of a practical system, not a theoretical one.

EDIT: As a hardware guy, this: http://www.robertnz.net/true_rng.html is what I think of as a true random number generator.

http://www.robertnz.net/true_rng.html

A hardware (true) random number generator is a piece of electronics that plugs into a computer and produces genuine random numbers as opposed to the pseudo-random numbers that are produced by a computer program such as newran. The usual method is to amplify noise generated by a resistor (Johnson noise) or a semi-conductor diode and feed this to a comparator or Schmitt trigger.

If that gave you either 0101010101... or 101010101010... life would be extremely boring.

B
 
gnasher729 said:
You said, not in these words, that a truly random sequence has a memory.

To be honest, I can't be bothered with you anymore. Your opinion of what a random sequence is is just so far off the chart that it is not worth spending any more of my time.

Goodbye.
Fine, suit yourself. As for my claims, I won't allow them to be characterized as anything other than what they are. If others wish to specifically debate what I am debating, they are free to do so. To mischaracterize my claims as if to make me appear ignorant on a specific matter which I am well versed in is not tolerable, however. I understand mathematics and computer science. I also understand the practical limitations of certain mathematical definitions and why I can't be bothered to care beyond their practical application in the world.

balamw said:
If that gave you either 0101010101... or 101010101010... life would be extremely boring.

B
Yes, it would be very boring, but I can construct a meaningful FSM for that definition of random. That's why I am saying we should NOT be focused on achieving a theoretically sufficient randomness, but only a practical randomness for a given application.
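For concreteness, the two-state machine for a 0101... stream is trivial to write down, which is exactly why that stream is fully predictable rather than random. A minimal Python sketch:

```python
# A two-state FSM: state 0 emits 0 and moves to state 1; state 1 emits 1
# and moves back to state 0. It reproduces the 0101... stream exactly.
def alternating_bits(n, state=0):
    out = []
    for _ in range(n):
        out.append(state)
        state = 1 - state  # the single transition rule
    return out

assert alternating_bits(6) == [0, 1, 0, 1, 0, 1]
```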

As far as trying to define random as something else, I am asking why many do not see the circular aspects of some commonly accepted mathematical definitions and why we should care about a theoretical definition other than for theoretical investigations if it has no practical application.

The thing that is bothersome to me about all of this is the inclination to generalize about matters which should be specifically investigated. I guess I don't think of things in terms of "random" or "not random", but I do think of things as dependent/independent random events and do not see a conflict with allowing differing definitions because there is no general definition of randomness that is meaningful unless it is respective to some other claim we are either told to accept or asked to evaluate. So, really I want to insist on a meaningful definition of randomness that is also logically consistent. Let's call that concept verifiable randomness.
 
Yes, it would be very boring, but I can construct a meaningful FSM for that definition of random.

Glenn Close in Fatal Attraction comes to mind, on-off-on-off-...

I'm sorry but random and finite state machines don't overlap well in my world which is probably the source of the disconnect.

Chaos is the natural order in my world, where small fluctuations in initial conditions in a simple system of coupled variables can often lead to wildly different results even in a purely deterministic system, e.g. two coupled pendulums instead of one. It becomes much harder to define a state machine if seemingly imperceptible changes to inputs can lead to very different outcomes.
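This sensitivity shows up even in something simpler than coupled pendulums, e.g. the logistic map at r = 4, a standard chaotic toy system. A rough Python sketch (the starting values and iteration count are arbitrary choices):

```python
# Logistic map x -> r*x*(1-x) with r = 4: fully deterministic, yet two
# starts differing by one part in a billion diverge within a few dozen steps.
def trajectory(x, steps, r=4.0):
    xs = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

ta = trajectory(0.2, 60)
tb = trajectory(0.2 + 1e-9, 60)

first_gap = abs(ta[0] - tb[0])                      # still tiny after one step
max_gap = max(abs(p - q) for p, q in zip(ta, tb))   # order 1 later on
```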

B
 
Glenn Close in Fatal Attraction comes to mind, on-off-on-off-...

I'm sorry but random and finite state machines don't overlap well in my world which is probably the source of the disconnect.

Chaos is the natural order in my world, where small fluctuations in initial conditions in a simple system of coupled variables can often lead to wildly different results even in a purely deterministic system, e.g. two coupled pendulums instead of one. It becomes much harder to define a state machine if seemingly imperceptible changes to inputs can lead to very different outcomes.

B
I understand what you are saying, but my position is that we are only responsible for what we can influence and for interacting with what we think exists (i.e., if there is an actual causal relationship that is beyond our detection, we are not responsible for it, because it is impossible to legitimately obligate X beyond X's actual ability to have influenced a given situation). An infinite deck of cards does not exist, because every deck of cards that exists can be counted by someone, somewhere, assuming that someone also exists, knows where the decks of cards are, and has the ability to count all of them. In short, since concept X doesn't exist in reality, I am entitled to say it has no practical application to me, and I don't care about its truth value.
 