Could you compress text files by mapping a word to how commonly it is used and translating it with an application?

It’s a bit of a weird shower thought, but basically I was wondering, hypothetically, if it would be possible to take data from a social media site like Reddit, map the most commonly used words to numbers starting at 1, and use a separate application to translate the text back and forth.

So if the word “because” was number 100 it would store the value with three characters instead of seven.

There could also be additions for suffixes so “gardening” could be 5000+1 or a word like “hoped” could be 2000-2 because the “e” is already present.

Would this result in any kind of space savings if you were using larger amounts of text like a book series?
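The scheme described above can be sketched in a few lines: rank words by frequency in a corpus, then translate words to their ranks and back. A minimal Python sketch (the tiny corpus is made up for illustration, and the suffix arithmetic like 5000+1 is left out):

```python
from collections import Counter

def build_dictionary(corpus: str) -> dict[str, int]:
    """Rank words by frequency: rank 1 = most common word."""
    counts = Counter(corpus.lower().split())
    return {word: rank for rank, (word, _) in enumerate(counts.most_common(), start=1)}

def encode(text: str, ranks: dict[str, int]) -> str:
    return " ".join(str(ranks[w]) for w in text.lower().split())

def decode(numbers: str, ranks: dict[str, int]) -> str:
    words = {rank: word for word, rank in ranks.items()}
    return " ".join(words[int(n)] for n in numbers.split())

corpus = "the cat sat on the mat because the mat was warm"
ranks = build_dictionary(corpus)
coded = encode("because the mat was warm", ranks)
assert decode(coded, ranks) == "because the mat was warm"
```

With a real frequency list, common long words would indeed get short numbers, which is the intuition behind dictionary-based compression.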

Snarwin,

This technique is called Huffman coding, and it's used in a number of compression algorithms.

CorrodedCranium,
@CorrodedCranium@leminal.space avatar

Huh, that was an interesting read. I’m not sure I understood the entirety of it, but it sounds like it would be a lot more efficient than what I was thinking.

morhp,

Yes, it’s very efficient and the core of what compression formats like .zip do.

The main difference from your idea is that computers count in binary (0, 1, 10, 11, 100, 101 and so on), and that you don’t want to assign these very short codes to words directly. Say you assigned 1 to the most common word: that word would be encoded very short, but you’d effectively take the 1 away from all the other codes, because you couldn’t tell whether 11 means the most common word twice or the 11th (3rd in decimal) most common word once.

Huffman coding essentially computes the optimal code assignments mathematically.

The other main difference between your suggestion and most compression algorithms is that you wouldn’t want to use a huge dictionary in practice: loading it and searching it would be very slow. Most compression algorithms have a rather small dictionary built in, and/or they build one on the fly from the data they want to compress.
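For the curious, Huffman’s construction can be sketched in a few lines of Python: repeatedly merge the two least frequent entries, prefixing their codes with 0 and 1. This is a simplified illustration of the idea, not the exact code any particular format uses:

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict[str, str]:
    """Build prefix-free binary codes: frequent symbols get shorter codes."""
    freq = Counter(text)
    # Heap entries: (frequency, unique tiebreaker, {symbol: code-so-far})
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        n1, _, left = heapq.heappop(heap)   # two least frequent subtrees
        n2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (n1 + n2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
# 'a' (most frequent) gets the shortest code, and no code is a
# prefix of another, so decoding is unambiguous.
assert len(codes["a"]) <= len(codes["b"]) <= len(codes["c"])
```

The prefix-free property is exactly what fixes the “is 11 one word or two?” ambiguity described above.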

Bizarroland,
@Bizarroland@kbin.social avatar

There's a website, I can't remember the link, but any text you search can be found to have already been written on one of its pages.

It uses an algorithm to generate every possible page of text in the English language, including gobbledygook and random keyboard smashing.

You can then share a link to a full page of text that is far smaller than the page itself. Given infinite computational resources, it would be possible to parse any program or any piece of software into its text equivalent and then generate the URL that this algorithm attaches to that entire page, reducing a thousand characters to 16.

It would then be possible to make a page consisting of these 16-character links and then compress that page to another page, turning 70,000 words of text or so into 16 characters.

Once you had a page full of those, you could compress them again by finding a page that contains a link to the links of all of the pages.

Given that this is perfect compression, you could literally reduce any file to the URL of the page you need (plus some information about how many pages deep you have to go, and any needed information about the final file structure and format at the end of the reconstruction). You’d regenerate the page, use it to regenerate the next page, and so on until you have fully reconstructed the file.

And, given that these pages can be summoned by a URL and do not actually exist until they are generated by the algorithm, it is entirely feasible that the algorithm itself could be stored on your local computer, meaning that with a somewhat complicated but still reasonable system, you could send a file of any size to any person in a text.

The problem is the computational resources needed to utilize this algorithm. It would have to crunch out the numbers to rebuild every single file that is sent its way and even on a fairly fast system that's going to take some time.

However, if anyone wants to take this idea and run with it, I'm pretty sure the world would appreciate your efforts. Since we have the ability to create a page of text using an algorithm, we probably should use it, and once it becomes commonplace I'm sure people will build chiplet accelerators for the process.

number6, (edited )

There’s a website, I can’t remember the link, but any text you search can be found to have already been written on one of its pages.

It’s The Library of Babel

You can type up to 3200 characters in lower case. With a short sentence though, the “title” and page number may be longer than your original text!

Edit: Fixed url title

Eggyhead,
@Eggyhead@artemis.camp avatar

This is kind of blowing my mind.

skulblaka,
@skulblaka@kbin.social avatar

It's the Library of Babel. Sorry, pedantic, I know, but the Tower of Babel is something different. If people go searching for that they're going to find a lot of discussion around the bible story and no links to the text generator.

number6,

Edited/Fixed.

Kerfuffle,

it would be possible to parse any program or any bit of software into its text equivalent and then generate the URL that attaches to this algorithm for that entire page reducing a thousand characters to 16.

This can’t work. Let’s use a simpler example, instead of 16 characters for the link let’s say it’s a single digit and let’s say the content of the “page” is 4 digits. One digit has 10 possible values: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. 4 digits have 10,000 possible combinations. With only one digit to index into the 10,000 possible combinations, you can point to only 10 of them.

It’s the same thing for pages of text. If you have a 16 character link and the content you’re trying to index with it is more than 16 characters then you can only point to some of the possibilities in the larger set.
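The counting argument above can be checked with concrete numbers (the alphabet sizes are assumptions: 64 URL-safe characters for the link, and 3200-character pages over the 29 symbols the Library of Babel uses):

```python
# Pigeonhole check: how many distinct values can a 16-character link
# address, versus how many distinct pages exist?
link_alphabet = 64        # assumed base64-style URL characters
page_alphabet = 29        # Library of Babel: 26 letters, space, comma, period
page_length = 3200

links = link_alphabet ** 16        # ~10^28 possible links
pages = page_alphabet ** page_length   # ~10^4677 possible pages

# Far fewer links than pages: most pages can never be addressed.
assert links < pages
```

No cleverness in the algorithm can get around this gap; a 16-character link simply cannot distinguish that many pages.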

Bizarroland,
@Bizarroland@kbin.social avatar

Then how is it that I was able to link to 800 words with 5 characters, (stripping aside the static portion of the links)?

Kerfuffle,

First, just think about the logic of what I said before: if there is a finite number of combinations in the link, how can you possibly link to a larger number of items? It’s just logically impossible.

Then how is it that I was able to link to 800 words with 5 characters, (stripping aside the static portion of the links)?

The fact that you were able to link to 800 words doesn’t really mean anything. somesite.com/a could point to a file that was gigabytes. This doesn’t mean the file got compressed to “a”. Right?

There also might be fewer combinations for that site than it appears. For an 800-word chunk of grammatical English text, there are far fewer combinations than for the equivalent length of arbitrary characters. Instead of representing each character in a word, it could just use an id like dog=1, antidisestablishmentarianism=2 and so on. Even using tricks like that, though, it’s pretty likely you’re only able to link to a subset of all the possible combinations.

Regarding compression in general, it’s a rule that you can’t compress data independent of its content. If you could, even if the compression only reduced the file by the tiniest fraction, you could just repeatedly apply the algorithm until the problem I described becomes obvious. If you could compress any large file down to a single byte, that single byte could only represent 256 distinct values, but there are more than 256 distinct possible files, so clearly something went wrong. This rule is kind of like the speed of light or perpetual motion: if your answer implies perpetual motion or FTL travel, you automatically know you did something wrong. Same thing with being able to compress without regard to the content.
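You can check this empirically: random (maximum-entropy) data doesn’t shrink, and re-compressing compressed output only adds overhead. A quick sketch with Python’s zlib:

```python
import os
import zlib

# Compress 64 KiB of random bytes, then compress the result again.
data = os.urandom(65536)
once = zlib.compress(data, 9)
twice = zlib.compress(once, 9)

# Random data is already at maximum entropy: DEFLATE can't shrink it,
# and each pass adds a few bytes of header/bookkeeping overhead.
assert len(once) >= len(data)
assert len(twice) >= len(once)
```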

Antimutt, (edited )
@Antimutt@lemmy.dbzer0.com avatar

The best compression algorithms are good at this. Look at the size of this Reddit download and compare when you uncompress it.

Rexios,

That’s literally how compression works

CorrodedCranium,
@CorrodedCranium@leminal.space avatar

I figured that’s roughly how JPEGs work but I wasn’t sure how exactly text compression would work. Do you have any recommendations for videos on data compression?

meta_synth,

Jpeg compression is much more complex than what you described. I recommend this video as a primer: youtu.be/0me3guauqOU

kozel,

To get the data, search “frequency list (of English)”.

Sethayy,

Computers are quick enough nowadays that it’s better to just count how many of each word appear in a block, then use the resulting ranking as indices. That saves you from needing a dictionary the size of the English language for decompression.
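That per-block idea looks roughly like this sketch (the helper names are made up; the point is that the dictionary travels with the block instead of being a fixed English-wide table):

```python
from collections import Counter

def compress_block(text: str) -> tuple[list[str], list[int]]:
    """Build a dictionary from this block alone and encode words as indices."""
    words = text.split()
    ranked = [w for w, _ in Counter(words).most_common()]
    index = {w: i for i, w in enumerate(ranked)}
    return ranked, [index[w] for w in words]

def decompress_block(ranked: list[str], indices: list[int]) -> str:
    return " ".join(ranked[i] for i in indices)

block = "to be or not to be"
dictionary, coded = compress_block(block)
assert decompress_block(dictionary, coded) == block
```

Real formats like DEFLATE do something in this spirit, but over byte sequences rather than whole words.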

youngalfred,

That’s pretty much what a tokenizer does for Large Language Models like Chat-GPT. You can see how it works here: platform.openai.com/tokenizer

Type in the word ‘Antidisestablishmentarianism’ and you can see it becomes 5 tokens instead of 28 characters.
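A toy version of the idea (real GPT tokenizers use byte-pair encoding, and the vocabulary and IDs below are made up for illustration; this greedy longest-match sketch just shows how subword IDs replace characters):

```python
def tokenize(text: str, vocab: dict[str, int]) -> list[int]:
    """Greedy longest-match tokenization into subword IDs."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest vocabulary entry that matches at position i.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(vocab[text[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token matches at position {i}")
    return tokens

# Hypothetical vocabulary of subword pieces.
vocab = {"anti": 1, "dis": 2, "establishment": 3, "arian": 4, "ism": 5}
assert tokenize("antidisestablishmentarianism", vocab) == [1, 2, 3, 4, 5]
```

Five IDs instead of 28 characters, matching what the OpenAI tokenizer page shows for that word.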

SpaceNoodle,

That’s kind of how file compression works - by replacing patterns with shorter symbolic representations!

RvTV95XBeo,

If I remember correctly, you can even pre-build dictionaries for some compression algorithms to speed up the process if you’re working with compressing lots of similar data.
