Kilobyte is precisely 1000 bytes
* Yeah, I read the article. Regardless of the IEC's noble attempt, in all my years of working with people and computers I've never heard anyone actually pronounce MiB (or write it out in full) as "mebibyte".
The author doesn't actually answer their question, unless I missed something?
They go on to make a few more observations, and finally say only that the current differing definitions are sometimes confusing to non-experts.
I don't see much of an argument here for changing anything. Some non-experts experience minor confusion about two things that are different; did I miss something bigger in this?
Because Windows, and only Windows, shows it this way. It is official and documented: https://devblogs.microsoft.com/oldnewthing/20090611-00/?p=17...
> Explorer is just following existing practice. Everybody (to within experimental error) refers to 1024 bytes as a kilobyte, not a kibibyte. If Explorer were to switch to the term kibibyte, it would merely be showing users information in a form they cannot understand, and for what purpose? So you can feel superior because you know what that term means and other people don't.
KB is 1024 bytes, and don't you dare try stealing those 24 bytes from me
I disagreed strongly - I think X-per-second should be decimal, to correspond to Hertz. But for quantity, binary seems better. (modern CS papers tend to use MiB, GiB etc. as abbreviations for the binary units)
Fun fact - for a long time consumer SSDs had roughly 7.37% over-provisioning, because that's what you get when you put X GB (binary) of raw flash into a box, and advertise it as X GB (decimal) of usable storage. (probably a bit less, as a few blocks of the X binary GB of flash would probably be DOA) With TLC, QLC, and SLC-mode caching in modern drives the numbers aren't as simple anymore, though.
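Where that ~7.37% comes from, as a quick sketch (Python just for the arithmetic; the constants are illustrative and, as noted, ignore bad blocks and spare area):

```python
# Sketch: X binary GB of raw flash sold as X decimal GB of usable storage.
raw_per_advertised_gb = 2**30   # one binary GB of physical flash
advertised_gb = 10**9           # one decimal GB printed on the box
spare = raw_per_advertised_gb - advertised_gb
print(spare / advertised_gb)    # 0.073741824 -> roughly 7.37% over-provisioning
```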
This ambiguity is documented at least back to 1984, by IBM, the pre-eminent computer company of the time.
In 1972 IBM started selling the IBM 3333 magnetic disk drive. This product catalog [0] from 1979 shows them marketing the corresponding disks as "100 million bytes" or "200 million bytes" (3336 mdl 1 and 3336 mdl 11, respectively). By 1984, those same disks were marketed in the "IBM Input/Output Device Summary"[1] (which was intended for a customer audience) as "100MB" and "200MB".
0: (PDF page 281) "IBM 3330 DISK STORAGE" http://electronicsandbooks.com/edt/manual/Hardware/I/IBM%20w...
1: (PDF page 38, labeled page 2-7, Fig 2-4) http://electronicsandbooks.com/edt/manual/Hardware/I/IBM%20w...
Also, hats off to http://electronicsandbooks.com/ for keeping such incredible records available for the internet to browse.
-------
Edit: The below is wrong. Older experience has corrected me - there has always been ambiguity (perhaps bifurcated between CPU/OS and storage domains). "And that with such great confidence!", indeed.
-------
The article presents wishful thinking. The wish is for "kilobyte" to have one meaning. For the majority of its existence, it had only one meaning - 1024 bytes. Now it has an ambiguous meaning. People wish for an unambiguous term for 1000 bytes; however, that word does not exist. People also might wish that others use kibibyte any time they reference 1024 bytes, but that is also wishful thinking.
The author's wishful thinking is falsely presented as fact.
I think kilobyte was the wrong word to ever use for 1024 bytes, and I'd love to go back in time to tell computer scientists that they needed to invent a new prefix to mean "1,024" / "2^10" of something, which kilo- never meant before kilobit / kilobyte were invented. Kibi- is fine, the phonetics sound slightly silly to native English speakers, but the 'bi' indicates binary and I think that's reasonable.
I'm just not going to fool myself with wishful thinking. If, in arrogance or self-righteousness, one simply assumes that every time they see "kilobyte" it means 1,000 bytes, they will make many, many mistakes. We will always have to take care to verify whether "kilobyte" means 1,000 or 1,024 bytes before implementing something which relies on that for correctness.
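For what it's worth, the only defensive option I know of is to refuse to guess. A minimal sketch of that idea, with a hypothetical `to_bytes` helper (not from any real library):

```python
# Hypothetical helper: the caller must state which convention they mean,
# because "KB" on its own is ambiguous.
def to_bytes(value: float, prefix: str, binary: bool) -> int:
    exponents = {"K": 1, "M": 2, "G": 3, "T": 4}
    base = 1024 if binary else 1000
    return int(value * base ** exponents[prefix])

print(to_bytes(4, "K", binary=False))  # 4000
print(to_bytes(4, "K", binary=True))   # 4096
```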
I gave some examples in my post https://blog.zorinaq.com/decimal-prefixes-are-more-common-th...
You can use `--si` for fake, 1000-byte kilobytes - trying it, it seems weird that these are reported with a lowercase 'k' but 'M' and so on remain uppercase.
Agreed. For the naysayers out there, consider these problems (worked through in the sketch after the list):
* You have 1 "MB" of RAM on a 1 MHz system bus which can transfer 1 byte per clock cycle. How many seconds does it take to read the entire memory?
* You have 128 "GB" of RAM and you have an empty 128 GB SSD. Can you successfully hibernate the computer system by storing all of RAM on the SSD?
* My camera shoots 6000×4000 pixels = exactly 24 megapixels. If you assume RGB24 color (3 bytes per pixel), how many MB of RAM or disk space does it take to store one raw bitmap image matrix without headers?
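A quick sketch of the arithmetic behind those three problems (the constants are mine, chosen only to show how the two readings of the prefixes diverge):

```python
MB_BINARY, MB_DECIMAL = 2**20, 10**6
GB_BINARY, GB_DECIMAL = 2**30, 10**9

# 1) 1 "MB" of RAM over a 1 MHz bus at 1 byte per cycle (MHz is unambiguously decimal):
print(MB_BINARY / 1_000_000)                 # 1.048576 seconds, not a clean 1 s

# 2) 128 "GB" (binary) of RAM onto a 128 GB (decimal) SSD:
print(128 * GB_BINARY <= 128 * GB_DECIMAL)   # False: the hibernation image won't fit

# 3) 24-megapixel RGB24 bitmap, 3 bytes per pixel:
raw = 6000 * 4000 * 3                        # 72,000,000 bytes
print(raw / MB_DECIMAL, raw / MB_BINARY)     # 72.0 MB decimal, ~68.66 MiB
```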
The SI definitions are correct: kilo- always means a thousand, mega- always means a million, et cetera. The computer industry abused these definitions because 1000 is close to 1024, creating endless confusion. It is an idiotic act of self-harm when one "megahertz" of clock speed is not the same mega- as one "megabyte" of RAM. IEC 60027 prefixes are correct: there is no ambiguity when kibi- (Ki) is defined as 1024, and it can coexist beside kilo- meaning 1000.
The whole point of the metric system is to create universal units whose meanings don't change depending on context. Having kilo- be overloaded (like method overloading) to mean 1000 and 1024 violates this principle.
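To put numbers on how far the overloaded readings drift apart as the prefixes grow (a small illustration of my own, not from any standards document):

```python
# Binary reading vs. SI reading of each prefix pair.
for n, name in enumerate(["kilo/kibi", "mega/mebi", "giga/gibi", "tera/tebi"], 1):
    print(f"{name}: binary is {(1024**n / 1000**n - 1) * 100:.1f}% larger")
# kilo/kibi: 2.4%, mega/mebi: 4.9%, giga/gibi: 7.4%, tera/tebi: 10.0%
```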
If you want to wade in the bad old world of context-dependent units, look no further than traditional measures. International mile or nautical mile? Pound avoirdupois or Troy pound? Pound-force or pound-mass? US gallon or UK gallon? US shoe size for children, women, or men? Short ton or long ton? Did you know that just a few centuries ago, every town had a different definition of a foot and pound, making trade needlessly complicated and inviting open scams and frauds?
It's the same reason, for pure marketing purposes, that screens are measured diagonally.
Approximating metric prefixing with kibi, mebi, gibi... is confusing because it doesn't make sense semantically. There is nothing base-10-ish about it.
I propose some naming based on shift distance, derived from the Latin iterativum. https://en.wikipedia.org/wiki/Latin_numerals#Adverbial_numer...
* 2^10, the kibibyte, is a deci (shifted) byte, or just a 'deci'
* 2^20, the mebibyte, is a vici (shifted) byte, or a 'vici'
* 2^30, the gibibyte, is a trici (shifted) byte, or a 'trici'
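If I read the proposal right, the names just track the shift count, which is easy to sketch (my own mapping of the proposed names onto shifts, purely for illustration):

```python
# Proposed names keyed to shift distance.
deci, vici, trici = 1 << 10, 1 << 20, 1 << 30
print(deci, vici, trici)  # 1024 1048576 1073741824
```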
I mean, we really only need to think in bytes for memory addressing, right? The base wouldn't matter much if we were talking exabytes, would it?
It would be nice to have a different standard for decimal vs. binary kilobytes.
But if Don Knuth thinks that the "international standard" naming for binary kilobytes is dead on arrival, who am I to argue?
Why not let kilobyte continue to mean 1024 and introduce kilodebyte to mean 1000? Byte, to me, implies a binary number system, and if you want to introduce new nomenclature to reduce confusion, give the new one a new name and let the older or more prevalent one in its domain keep the old one...