replied 1154d
I am afraid this kind of thinking is not at all new, and also not feasible. Say I toss a fair coin 100 times, and I want to store the resulting head/tail values in order (as bits). There simply is no way around the fact that I would need 100 bits minimum to account for the entropy in the data. If instead there was a common pattern (not a fair coin), compression would in theory be possible. For example, if we were guaranteed that the coin would come up heads only 3 times, we could just store the indices of the 3 heads.
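A rough sketch of the counting behind that last example (my own numbers, using Python's standard library; the "exactly 3 heads" guarantee is the assumption doing all the work):

```python
import math

FLIPS = 100
HEADS = 3  # the guaranteed number of heads (the assumed "common pattern")

# Without any pattern, 100 fair flips need 100 bits: 2**100 equally likely outcomes.
unconstrained_bits = FLIPS

# With the guarantee "exactly 3 heads", only C(100, 3) outcomes are possible,
# so an ideal code needs log2(C(100, 3)) bits.
outcomes = math.comb(FLIPS, HEADS)
ideal_bits = math.log2(outcomes)

# A simple concrete scheme: store the three head positions, 7 bits each (0..99 fits in 7 bits).
naive_bits = HEADS * math.ceil(math.log2(FLIPS))

print(f"possible outcomes with exactly {HEADS} heads: {outcomes}")
print(f"ideal code length: {ideal_bits:.1f} bits")   # about 17.3 bits
print(f"three 7-bit indices: {naive_bits} bits")
print(f"no pattern at all: {unconstrained_bits} bits")
```

Both schemes land far below 100 bits, but only because the guarantee removed most of the entropy before any encoding happened.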
Compression does not decrease entropy; neither does your sqrt idea (do the math and see!), nor does any other such clever trick.
Now you may say: I don't know all algorithms, I cannot be sure that AES, for example, does not remove entropy; I certainly haven't done the math on all of them, and I gave no proof. But think of it this way: take the whole thing to the extreme. Suppose I use only 4 applications: macOS, OpenOffice, Brave browser and GIMP. Should I then be able to compress these so much that I only need 2 bits to save their entire code? macOS=00, OpenOffice=01, Brave=10, GIMP=11. And when there is a new version of macOS that I want, what then? Should I be able to compress the 4 apps AGAIN with the same codes as before, so that now suddenly two versions of macOS can be rebuilt from the bit pattern 00 alone? Does that make logical sense?
I could give a more purely logical proof if you wanted, but I will not do that unless you really have use for it. Btw, someone else might have proved this already.
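The counting (pigeonhole) argument behind the 4-apps example, as a minimal sketch (the app names and code assignments here are just illustrative):

```python
# 2 bits give only 2**2 = 4 distinct codewords.
codes = [format(i, "02b") for i in range(4)]          # ['00', '01', '10', '11']
apps = ["macOS", "OpenOffice", "Brave", "GIMP"]       # 4 distinct inputs: codes exhausted
codebook = dict(zip(codes, apps))

# A fifth distinct input (say, a new macOS version) must reuse a taken code,
# so decoding that code can no longer tell the two versions apart.
new_app = "macOS (new version)"
reused = codes[0]                                     # whichever code we pick is already taken
print(f"assigning {reused!r} to {new_app!r} collides with {codebook[reused]!r}")
```

In general, n bits can distinguish at most 2**n inputs; that is the whole objection in one line.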
replied 1153d
I understand the basic criticism here. Entropy is something that you can't just make smaller, according to theory. Yeah, you could never compress stuff down to like a single letter
replied 1153d
Another thing. Isn't it weird how if you flip a coin 100 times and it for example does 50 heads in a row then 50 tails in a row that we could totally represent this in way less than 100
replied 1153d
If it was 100 heads, and it always was, sure, you could save that in zero bits. If it was often 100 heads, it could be 1 bit for this special case, at a cost of one bit for all other cases. Like, the first bit is always 0 if it's not 100 heads (even this is a bit oversimplified). So you would need 1 bit if it is 100 heads, 101 bits if not.
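A minimal sketch of that flag-bit scheme (my own encoding of the idea, not code from the thread): one leading bit says "all heads", otherwise the raw 100 bits follow.

```python
FLIPS = 100
ALL_HEADS = "1" * FLIPS

def encode(flips: str) -> str:
    """flips is a string of '1' (heads) / '0' (tails) of length 100."""
    if flips == ALL_HEADS:
        return "1"               # 1 bit for the special case
    return "0" + flips           # flag bit + raw data = 101 bits otherwise

def decode(bits: str) -> str:
    if bits == "1":
        return ALL_HEADS
    return bits[1:]              # strip the flag bit

mixed = "10" * 50                # an ordinary, non-special sequence
assert decode(encode(ALL_HEADS)) == ALL_HEADS
assert decode(encode(mixed)) == mixed
print(len(encode(ALL_HEADS)), "bit for all heads,", len(encode(mixed)), "bits otherwise")
```

The saving on the special case is paid for by the extra flag bit on every other sequence, which is exactly the trade-off being described.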
replied 1153d
| bits? Does entropy have a hard objective definition here, or is it subjective?
|
| Here is another showerthought. Different compression idea.
|
| Every possible sequence of every possible
replied 1153d
It is definitely objective in my example. Think of it this way: instead of 100 coin flips, do 2, so in binary: 00, 01, 10, 11. And try to map the info into 1 bit: 0 or 1. You can map 0 to 00 and 1 to 01, but then what do you map to 10 and 11? If you know that 10 and 11 never occur, sure, you have compressed the data, but you do not know that.

This is proof that you cannot reduce entropy, only change the data format (and often add some entropy). Even a compression algorithm will INCREASE the entropy of a totally random dataset. About compression: for a given special dataset it may indeed decrease the size, but if you use it over and over on different random datasets, you will spend more space than uncompressed. That is, it is "proof" (not very rigorous or mathematically clean) that you cannot reduce entropy by a whole bit of information. It could easily be generalized to be more fine-grained.
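The "more space than uncompressed on average" claim is easy to check empirically. A small sketch (my own, using Python's zlib as a stand-in for any general-purpose compressor):

```python
import os
import zlib

TRIALS = 200
SIZE = 1024  # bytes of random (incompressible) data per trial

total_in = total_out = 0
for _ in range(TRIALS):
    data = os.urandom(SIZE)                    # "fair coin" data: no pattern to exploit
    compressed = zlib.compress(data, 9)        # best effort, still can't win
    total_in += len(data)
    total_out += len(compressed)

print(f"average input size:  {total_in / TRIALS:.1f} bytes")
print(f"average output size: {total_out / TRIALS:.1f} bytes")  # consistently a bit larger
```

On data with real structure the same call wins big, which is the "special dataset" half of the argument.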
replied 1153d
| number lives in irrational and transcendental numbers such as pi, right? (Never mind the infeasibility, we just care about whether or not it's logically possible.) Could we not identify a string of data represented in numbers as an index position of pi? For example, '14159265' could be p[1:8]. Is that not potential for insane bit reduction, theoretically? (If the numbers get too big, no biggie, numbers can also be expressed as equations or in scientific notation.)
|
| I think all we gotta do to prove it is to do it once, and show results we
replied 1153d
So you use some Pi magic to compress, and when it doesn't work you use some more magic. I am sorry, it will not help you; "the devil is in the details" here.
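A sketch of what the pi-index idea runs into in practice (my own experiment; mpmath generates the digits and the targets are arbitrary random strings): the index where a short digit string first appears tends to take about as many digits to write down as the string itself, and longer strings usually do not appear at all within any number of digits you can feasibly search.

```python
import random
from mpmath import mp

# First ~50,000 decimal digits of pi (mpmath is just one convenient source of digits).
# str(mp.pi) looks like "3.14159...", so drop the leading "3.".
mp.dps = 50_000
pi_digits = str(mp.pi)[2:]

random.seed(0)
for length in (2, 3, 4, 5, 6):
    target = "".join(random.choice("0123456789") for _ in range(length))
    index = pi_digits.find(target)
    if index == -1:
        print(f"{target!r} ({length} digits): not found in the first {len(pi_digits)} digits")
    else:
        print(f"{target!r} ({length} digits): first found at index {index} "
              f"({len(str(index))} digits needed to write that index down)")
```

So even before worrying about how you would ever locate the index for a long string, the index plus the length you would have to store is not, on average, any smaller than the string itself.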
replied 1153d
| otherwise wouldn't be able to yield, correct?
|
| I don't believe in a free lunch, but my mind is open.
replied 1153d
People have bought me free lunch lots of times lol.
replied 1153d
Ah, but if they bought the lunch, was it REALLY free? Huh? The plot thickens.
replied 1153d
| obviously. But what if there was a theoretical floor, based on the laws of mathematics? I'm not arguing there can be, but what if?
|
| I think it all comes down to a single premise. If taking the square root of a Very Large number (1000 characters) can massively decrease the number of characters, then isn't that by itself a compression? All you gotta do is not add
replied 1153d
My advice for the sqrt idea: DO THE MATH. It is really not hard, and it will help you understand the issues involved. DO NOT try to add AES or something into the mix; do ONE thing first.
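Doing that math might look something like this sketch (my own, in Python): the integer square root of a 1000-digit number does have only about 500 digits, but reconstructing the original exactly also needs the remainder, and the two together are as long as what you started with.

```python
import math
import random

random.seed(1)
# A random 1000-digit number, standing in for "1000 characters" of data.
n = random.randrange(10**999, 10**1000)

s = math.isqrt(n)        # integer part of the square root (~500 digits)
r = n - s * s            # remainder needed to get n back exactly (typically ~500 digits)

assert s * s + r == n    # lossless reconstruction requires BOTH s and r

print("digits in n:          ", len(str(n)))
print("digits in sqrt part s:", len(str(s)))
print("digits in remainder r:", len(str(r)))
print("digits in s + r total:", len(str(s)) + len(str(r)))
```

Dropping r and storing only s would indeed shrink things, but then many different values of n map to the same s and you cannot tell which one you had; it is the same 00/01/10/11 problem as with the coin flips.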
replied 1153d
| more characters with notation than what you reduce with math. Is there some reason you could never be on the positive end here?
|
| My idea with using AES-256 encryption is simply that doing so gives you a totally different set of characters with different entropy, I think. So you could compress it differently.
|
| Logically speaking, I don't think a singularity is realistic, no, but what if the shrinking process involved with square-rooting results in more compression than not? If the compression even had a mere 51% chance of being superior, then couldn't we just do it over and over again to get a better compression?
|
| I think it all hinges on the premise that math allows us to bidirectionally and deterministically shorten a character count. Is this assumption itself incorrect?
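To close the loop on that last question, the 00/01/10/11 argument from earlier generalizes directly; a short sketch of the counting (my own illustration, over a 2-symbol alphabet):

```python
# Inputs: all bit strings of exactly n characters.
# Outputs available to a "shortener": all bit strings of length 1 .. n-1.
n = 8
num_inputs = 2 ** n                                   # 256
num_shorter = sum(2 ** k for k in range(1, n))        # 2 + 4 + ... + 128 = 254

# A deterministic AND reversible shortener would be an injective map from inputs
# into strictly shorter strings; pigeonhole requires num_inputs <= num_shorter.
print(f"{num_inputs} inputs vs {num_shorter} strictly shorter strings")
print("every length-8 input can be losslessly shortened:", num_inputs <= num_shorter)
```

Any trick (square roots, AES passes, pi indices) that shortens some inputs therefore has to lengthen, or fail to distinguish, others; it can only pay off when the inputs are not actually arbitrary.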