# Tag: randomness

Entropy is the number of ways you can arrange the littlest parts of a system and still get the same big system (1). Put another way, entropy is a measure of how many different ways you can rearrange the atoms of an object and still have it look pretty much the same.

For example, a bag full of Lego bricks has a lot of entropy because there are many equivalent states or configurations (disorder). The greater the randomness in a system, the greater the entropy. But a house built with those very same bricks has low entropy because there are (relatively) few ways to arrange the bricks into that house (order).

When the little house is built, the entropy of the bricks decreases. But the total entropy of the Universe actually increases, because the kid who built the house dissipated some heat into the air.
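The brick picture can be made quantitative with Boltzmann's formula, S = k ln W, where W counts the microstates consistent with one macrostate. A minimal sketch, with k set to 1 and invented microstate counts purely for illustration:

```python
import math

def boltzmann_entropy(num_microstates):
    # Boltzmann's formula S = k * ln(W); k set to 1 for illustration.
    return math.log(num_microstates)

# A bag of 12 distinct bricks: every ordering still "looks like a bag",
# so all 12! arrangements count as the same macrostate.
bag_microstates = math.factorial(12)

# A specific finished house: assume only a handful of build configurations
# produce it (a made-up number for the sketch).
house_microstates = 4

S_bag = boltzmann_entropy(bag_microstates)      # high entropy (disorder)
S_house = boltzmann_entropy(house_microstates)  # low entropy (order)
print(S_bag, S_house)
```

The point is only the comparison: many equivalent configurations means a larger W and hence a larger entropy.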

A string of bits is random if and only if it is shorter than any computer program that can produce it (Kolmogorov randomness). This means that random strings cannot be compressed.

But a scalable algorithm may produce a sequence that mimics randomness for any length and period we want. The fact that a sequence looks patternless does not mean it is random: this is the case for the digits of an irrational number like pi, which pass statistical tests of randomness yet are deterministic and can be generated by a short program.
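The link between randomness and incompressibility is easy to demonstrate with a general-purpose compressor. Here zlib stands in for the idealized shortest-program measure (which is uncomputable); a minimal sketch:

```python
import os
import zlib

# A highly patterned byte string versus bytes from the OS entropy source.
patterned = b"0123456789" * 1000   # 10,000 bytes with obvious structure
random_bytes = os.urandom(10000)   # 10,000 bytes, presumed unpredictable

compressed_patterned = zlib.compress(patterned, 9)
compressed_random = zlib.compress(random_bytes, 9)

# The patterned data shrinks dramatically; the random data does not
# (zlib typically makes it slightly larger, due to framing overhead).
print(len(compressed_patterned), len(compressed_random))
```

A practical compressor only gives an upper bound on Kolmogorov complexity, but the contrast between the two lengths makes the idea concrete.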

Randomness concerns how likely some future event is to take place. That property was named ‘probability’ by Bernoulli in the 17th century.

Markov chain Monte Carlo (MCMC) is a family of algorithms used to map out a posterior distribution by drawing samples from it. We use this method instead of the quadratic approximation because, when a distribution has multiple peaks, the quadratic approximation can converge to a local maximum and fail to capture the true shape of the posterior. Monte Carlo algorithms instead use the principles of randomness to explore problems that would otherwise be difficult, if not impossible, to solve analytically.
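As a concrete illustration, here is a minimal random-walk Metropolis sampler (the simplest MCMC algorithm) targeting a two-peaked density that a single quadratic approximation could not capture. The target density, the step size, and the iteration count are all invented for the sketch:

```python
import math
import random

random.seed(0)

def target(x):
    # Unnormalized bimodal density: an equal mixture of N(-3, 1) and N(3, 1).
    return math.exp(-0.5 * (x + 3) ** 2) + math.exp(-0.5 * (x - 3) ** 2)

def metropolis(n_samples, step=2.0, x0=0.0):
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + random.gauss(0, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        if random.random() < target(proposal) / target(x):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(20000)
```

Because acceptance depends only on a ratio of densities, the normalizing constant of the posterior is never needed, which is exactly why MCMC is practical when analytic integration is not.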

Let’s start this discussion with an analogy, which we can update as we traverse through the different types of algorithms.

## Consider running them jointly as one.

Why is the statement “We can run two experiments concurrently, and because of randomness experiment 1 will affect all the buckets in experiment 2 equally” not always correct? It fails when experiment #2 has more of an effect on one branch of experiment #1 than on the other branch of experiment #1.

Example #1

The general case is an interaction effect between the two experiments. Say experiment #1 is testing whether a button is red or blue, and experiment #2 is testing whether the button's text is black or red. When you run both experiments at the same time you essentially get four branches: red button with black text, red button with red text, blue button with black text, and blue button with red text.
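The interaction can be made concrete with a toy table of conversion rates (the numbers are invented): red text is hard to read on a red button, so the effect of experiment #2 depends on which branch of experiment #1 a user landed in.

```python
# Hypothetical conversion rates for the four combined branches.
rates = {
    ("red button", "black text"): 0.10,
    ("red button", "red text"): 0.02,   # red-on-red text is unreadable
    ("blue button", "black text"): 0.10,
    ("blue button", "red text"): 0.10,
}

# Effect of switching the text to red, within each branch of experiment #1:
effect_on_red_button = rates[("red button", "red text")] - rates[("red button", "black text")]
effect_on_blue_button = rates[("blue button", "red text")] - rates[("blue button", "black text")]

# The two effects differ, so experiment #2 does NOT hit both
# branches of experiment #1 equally: that is the interaction.
print(effect_on_red_button, effect_on_blue_button)
```

Randomized assignment balances *who* lands in each bucket, but it cannot balance away an effect that genuinely differs across buckets.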

The previous article I wrote about randomness proved quite controversial. After all, random processes are used all the time to model things in science. How can I say randomness is not a scientific explanation?

Let me first make a distinction between a model and an explanation. A model shows us how some physical thing behaves, but it does not account for the cause of the thing. An explanation, on the other hand, tries to identify that cause.

But surely if we can effectively model something with randomness, then randomness must also be part of the causal explanation for the thing? Well, not so fast.

Let’s look at how we model randomness with computers. Computers themselves are not random in the slightest. Computer code is entirely deterministic.
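A linear congruential generator, one of the oldest pseudorandom techniques, makes the point: the output looks noisy, yet the same seed always reproduces exactly the same sequence. (The constants below are the common Numerical Recipes choices.)

```python
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    # Linear congruential generator: x_{k+1} = (a * x_k + c) mod m.
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

# Identical seeds yield identical "random" streams: pure determinism.
print(lcg(42, 5))
print(lcg(42, 5) == lcg(42, 5))  # True
```

Every "random" number a typical program draws comes from some such deterministic recurrence seeded by an initial state.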

## Which Hypothesis Test should you use? Pearson or Spearman? T-Test or Z-Test? Chi Square? No problem with easy-ht.

One of the main difficulties a new data scientist may encounter concerns statistics basics. In particular, it may be hard to know which hypothesis test to use in a specific situation, such as when a Chi Square test can be used, or what the difference is between the Pearson correlation coefficient and the Spearman rank correlation.

For this reason, I have implemented a Python package, called easy-ht, which makes it possible to perform some statistical tests, such as tests of correlation, normality, randomness, and means, without worrying about which specific test to use.
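easy-ht's own API is not shown here; to illustrate the Pearson-vs-Spearman distinction it abstracts away, here is a pure-Python computation on a monotonic but nonlinear relationship, where Spearman is (numerically) a perfect 1 while Pearson is not:

```python
import math

def pearson(xs, ys):
    # Pearson: linear correlation of the raw values.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def spearman(xs, ys):
    # Spearman: Pearson correlation of the ranks (no ties in this example).
    rank = lambda vs: [sorted(vs).index(v) for v in vs]
    return pearson(rank(xs), rank(ys))

x = list(range(1, 11))
y = [v ** 3 for v in x]   # monotonic but strongly nonlinear

print(pearson(x, y))      # noticeably below 1: the relation is not linear
print(spearman(x, y))     # ~1.0: the ranks agree perfectly
```

Pearson asks "is the relationship linear?", Spearman asks "is it monotonic?", which is why a tool that picks the test for you still benefits from knowing the distinction.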

“Extent” (2018) at DUST by Paul Michael Draper (August 9, 2021, 12:43 min)

Time stands still as two old friends attempt to grapple with a question that defines their very existence. If you could live forever, would you?

Review: Edward, the greatest inventor, has invented many things, including his friend Alexander. But he comes to think everything is futile because he faces oblivion at death.

Meanwhile, android Alexander wants Edward to enable him to “cherish moments” and to be able to long for a “tomorrow that may never come” — which, in the context, means he wants mortality.

Alexander reflects, “I think about what my forever may be. It haunts me,” and later, “Pleasure and pain are no longer relevant when you remove the present danger of death.”