litchralee
@litchralee@sh.itjust.works
- Comment on Is there a practical reason data centers have to sprawl outward instead of upward? 2 days ago:
In the past, we did have a need for purpose-built skyscrapers housing dense racks of electronic machines, but it wasn’t for data centers. No, it was for telephone equipment. See the AT&T Long Lines building in NYC, a windowless monolith of a structure in Lower Manhattan. It stands at 170 meters (550 ft).
This NYC example shows that it’s entirely possible to build telephone equipment upward, and it was very necessary considering the cost of real estate in that city. But if we look at the difference between a telephone exchange and a data center, we quickly realize why the latter can’t practically achieve skyscraper heights.
Data centers consume enormous amounts of electric power, and this produces a near-equivalent amount of heat. The chiller units for a data center are themselves estimated to consume something around a quarter of the site’s power consumption, to dissipate the heat energy of the computing equipment. For a data center that’s a few stories tall, the heat density per land area is enough that a roof-top chiller can cool it. But if the data center grows taller, it has a lower ratio of rooftop to interior volume.
This is not unlike the ratio of surface area to interior volume, which is a limiting factor for how large (or small) animals can be, before they overheat themselves. So even if we could mount chiller units up the sides of a building – which we can’t, because heat from the lower unit would affect an upper unit – we still have this problem of too much heat in a limited land area.
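To put toy numbers on the rooftop problem: the plot of land (and thus the roof) stays fixed while each added floor adds more heat. All figures below are invented for illustration, not real data-center specs.

```python
# Toy model: heat that the rooftop must reject, per square meter of roof,
# as a data center grows taller on a fixed plot. All numbers are invented.

FOOTPRINT_M2 = 10_000     # assumed building footprint (fixed land area)
KW_PER_FLOOR = 5_000      # assumed IT load per floor

for floors in (2, 10, 50):
    total_heat_kw = KW_PER_FLOOR * floors            # heat scales with floor count
    heat_per_roof_m2 = total_heat_kw / FOOTPRINT_M2  # roof area never grows
    print(f"{floors:2d} floors: {heat_per_roof_m2:.1f} kW per m^2 of roof")
```

The burden on the fixed roof grows linearly with height, which is exactly the surface-to-volume squeeze described above.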
- Comment on Why do languages sometimes have letters which don't have consistent pronunciations? 4 days ago:
The French certainly benefitted from the earlier Jesuit work, although the French made their own attempts at “westernizing” parts of the language. I understand that today in Vietnam, the main train station in Hanoi is called “Ga Hà Nội”, where “ga” comes from the French “gare”, meaning train station (eg Gare du Nord in Paris). This kinda makes sense since the French would have been around when railways were introduced in the 19th Century.
Another example is what is referred to in English as the “Gulf of Tonkin incident”, referring to the waters off the coast of north Vietnam. Here, Tonkin comes from the French transliteration of Đông Kinh (東京), which literally means “eastern capital”.
So far as I’m aware, neither English nor French uses the name Tonkin anymore (it’s very colonialism-coded), and modern Vietnamese calls those waters by a different name anyway. There’s also another problem: that name is already in use by something else, namely the Tokyo metropolis in Japan.
In Japanese, Tokyo is written as 東京 (eastern capital) in reference to it being east of the cultural and historical seat of the Japanese Emperor in Kyoto (京都, meaning “capital metropolis”). Although most Vietnamese speakers would just say “Tokyo” to refer to the city in Japan, if someone did say “Đông Kinh”, people are more likely to think of Tokyo (or have no clue) than to think of an old bit of French colonial history. These sorts of homophones exist between the CJKV languages all the time.
And to wrap up the fun fact, if Tokyo is the most well-known “eastern capital” when considering the characters in the CJKV languages, we also have the northern capital (北京, Beijing, or formerly “Peking”) and the southern capital (南京, Nanjing). There is no real consensus on where the “western capital” is.
Vietnamese speakers will in fact say Bắc Kinh when referring to the Chinese capital city, and I’m not totally sure why it’s an exception like that. Then again, some newspapers will also print the capital city of the USA as Hoa Thịnh Đốn (華盛頓) rather than “Washington, DC”, because that’s how the Chinese wrote it down first; the name was then brought into Vietnamese, and later rendered in the modern script. To be abundantly clear, it shouldn’t be surprising to have a progression from something like “Wa-shing-ton” to “hua-shen-dun” to “hoa-thinh-don”.
- Comment on Why do languages sometimes have letters which don't have consistent pronunciations? 4 days ago:
As a case study, I think Vietnamese is especially apt to show how the written language develops in parallel and sometimes at odds with the spoken language. The current alphabetical script of Vietnamese was only adopted for general use in the late 19th Century, in order to improve literacy. Before that, the grand majority of Vietnamese written works were in a logographic system based on Chinese characters, but with extra Vietnamese-specific characters that conveyed how the Vietnamese would pronounce those words.
The result was that Vietnamese scholars pre-20th Century basically had to learn most of the Chinese characters and their Cantonese pronunciations (not Mandarin, since that’s the dialect that’s geographically farther away), and then memorize how they are supposed to be read in Vietnamese, further compounded by characters that sort-of convey hints about the pronunciation. This is akin to writing a whole English essay using Japanese katakana.
Also, the modern Vietnamese script is a work of Portuguese Jesuit scholars, who were interested in rendering the Vietnamese language into a more familiar script that could be read phonetically, so that words are pronounced letter-by-letter. That process, however faithful they could manage it, necessarily obliterates some nuance that a logographic language can convey. For example, the word bầu can mean either a gourd or to be pregnant. But in the old script, no one would confuse 匏 (gourd) with 保 (to protect; pregnant) in the written form, even though the spoken form requires context to distinguish the two.
Some Vietnamese words were also imported into the language from elsewhere, having not previously existed in spoken Vietnamese. So the pronunciation would hew closer to the origin pronunciation, and then to preserve the lineage of where the pronunciation came from, the written word might also be written slightly differently. For example, nhôm (meaning aluminum) draws from the last syllable of the French aluminium. Loanwords – and there are many in Vietnamese, going back centuries – will mess up the writing system too.
- Comment on Noob RAM speed question 6 days ago:
I’m not a computer engineer, but I did write this comment for a question on computer architecture. At the very outset, we should clarify that RAM capacity (# of GBs) and clock rate (aka frequency; eg 3200 MHz) are two entirely different quantities, and one generally cannot be used to compensate for the other. It is akin to halving an automobile’s fuel tank in order to double the top speed of the car.
Since your question is about performance, we have to look at both the technical impacts to the system (primarily from the reduced clock rate) and also the perceptual changes (due to having more RAM capacity). Only by considering both together can we arrive at some sort of coherent answer.
You’ve described your current PC as having an 8 GB stick of DDR4 3200 MHz. Strictly speaking, that “3200” is the transfer rate, not the clock: the memory controller in your CPU (pre-DDR4 era CPUs would have put the memory controller on the motherboard) drives the RAM’s I/O bus at 1600 MHz. A single clock cycle is a square wave that goes up and then comes down. DDR stands for “Double Data Rate”, and means that a group of bits (called a transaction) is sent on both the rising and the falling edge of that single clock cycle. So a 1600 MHz bus moves 3200 million transactions per second (3200 MT/s). For this reason, this memory is advertised as DDR4-3200.
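As a sanity check on those numbers, here’s a quick back-of-envelope calculation, assuming the standard 64-bit data bus of a non-ECC DIMM:

```python
# Peak theoretical bandwidth of one DDR4-3200 DIMM.
# Assumes the standard 64-bit (8-Byte) data bus of a non-ECC DIMM.

clock_mhz = 1600                                  # actual I/O clock
transfers_per_sec = clock_mhz * 2 * 1_000_000     # double data rate -> 3200 MT/s
bytes_per_transfer = 64 // 8                      # 64-bit bus = 8 Bytes
peak_gb_per_s = transfers_per_sec * bytes_per_transfer / 1e9
print(f"{peak_gb_per_s} GB/s")                    # 25.6 GB/s
```

That 25.6 GB/s figure is why DDR4-3200 modules are also labeled PC4-25600.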
Some background about DDR versus other RAM types, when used in PCs: the DDR DIMMs (aka sticks) are typically made of 8 visually-distinct chips on each side of the DIMM, although some ECC-capable DIMMs will have 9 chips per side. These are the small black boxes that you can see, but they might be hidden underneath the DIMM’s heatsink, if it has one. The total capacity of these sixteen chips on your existing stick is 8 GB, so each chip should be 512 MB. A rudimentary way to store data would be for the first 512 MB to be stored in the first chip, then the next 512 MB in the second chip, and so on. But DDR DIMMs do a clever trick to increase performance: the data is “striped” across all 8 or 16 chips. That is, to retrieve a single Byte (8 bits), the eight chips on one face of the DIMM are instructed to return their stored bit, and the memory controller composes these into a single Byte to send to the CPU. This all happens in the time of a single transaction.
We can actually do that on both sides of the DIMM, so two Bytes could be retrieved at once. This is known as dual-rank memory. But why should each chip only return a single bit? What if each chip could return 4 bits at a time? If all sixteen chips support this 4-bit output width (known as a ×4 organization), we would get 64 bits (8 Bytes), still in the same time as a single transaction. Compared to earlier, where we didn’t stripe the bits across all sixteen chips: it would have taken 16 times longer for one chip to return what 16 chips can return in parallel. Free performance!
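The striping idea can be modeled with a toy sketch, one bit per chip, with dictionaries standing in for each chip’s storage array (real DRAM does all of this in hardware):

```python
# Toy model of bit-striping across the 8 chips on one face of a DIMM:
# chip i holds bit i of every Byte, and all chips answer in parallel.

def write_byte(chips, addr, value):
    for i in range(8):
        chips[i][addr] = (value >> i) & 1   # each chip stores one bit

def read_byte(chips, addr):
    # the memory controller reassembles the 8 parallel bits into a Byte
    return sum(chips[i][addr] << i for i in range(8))

chips = [dict() for _ in range(8)]          # 8 chips, each a tiny bit store
write_byte(chips, addr=0, value=0xA5)
print(hex(read_byte(chips, 0)))             # 0xa5
```

Since all eight lookups happen at once, the whole Byte arrives in the time one chip takes to produce one bit.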
But why am I mentioning these engineering details, which have already been built into the DIMM you own? The reason is that they’re the necessary background to explain the next DDR hat-trick for memory performance: multi-channel memory. The most common is dual-channel memory, and I’ll let this “DDR4 for Dummies” quote explain:
A memory channel refers to DIMM slots tied to the same wires on the CPU. Multiple memory channels allow for faster operation, theoretically allowing memory operations to be up to four times as fast. Dual channel architecture with 64-bit systems provides a 128-bit data path. Memory is installed in banks, and you have to follow a couple of rules to optimize performance.
Basically, dual-channel is kinda like having two memory controllers on the CPU, each driving half of the DDR in the system. On an example system with two 1 GB sticks of RAM, we could have each channel driving a single stick. A rudimentary use would be if the first 1 GB of RAM came from channel 1, and then the second 1 GB came from channel 2. But from what we saw earlier with dual-rank memory, this is leaving performance on the table. Instead, we should stripe/interleave memory accesses across both channels, so that each stick of RAM returns 8 Bytes, for a total of 16 Bytes in the time of a single transaction.
So now let’s answer the technical aspect of your question. If your system supports dual-channel memory, and you install that second DIMM into the correct slot to make use of that feature, then in theory, memory bandwidth should double, because of striping the accesses across two independent channels. The downside is that for that whole striping thing to work, all channels must be running at the same speed, or else one channel would return data too late. Since you have an existing 3200 MHz stick but the new stick would be 2400 MHz, the only thing the memory controller can do is run the existing stick at the lower speed of 2400 MHz. Rough math says that the existing stick is now operating at only 75% of its rated speed, but with the doubling of channels, that might lead to 150% of the original performance. So still a net gain, but less than ideal.
The perceptual impact has to do with how a machine might behave now that it has 16 GB of memory, having increased from 8 GB. If you were only doing word processing, your existing 8 GB might not have been fully utilized, with the OS basically holding onto the spare capacity. But if instead you had 50 browser tabs open, then your 8 GB of RAM might have been entirely utilized, with the OS having to shuffle memory onto your hard drive or SSD. This is because those unused tabs still consume memory, despite not being actively in front of you. In some very extreme cases, this “thrashing” causes the system to slow to a crawl, because the shuffling effort takes up most of the memory and disk bandwidth. If increasing from 8 GB to 16 GB would prevent thrashing, then the computer would overall feel faster than before, and that’s on top of the theoretical 50% performance gain from earlier.
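The rough math in one line, assuming perfectly ideal scaling (which real workloads won’t reach):

```python
# Mixed-speed dual channel vs. the original single stick, idealized.
single_3200 = 3200 * 1           # one channel at 3200 MT/s
dual_2400 = 2400 * 2             # two channels, both downclocked to 2400 MT/s
print(dual_2400 / single_3200)   # 1.5 -> ~150% of the original bandwidth
```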
Overall, it’s not ideal to mix DDR speeds, but if the memory controller can drive all DIMMs at the highest common clock speed and with multi-channel memory, then you should still get a modest boost in technical performance, and possibly a boost in perceived performance. But I would strongly recommend matched-speed DDR, if you can.
- Comment on Is it insane to run a home server on an old laptop instead of a Raspberry Pi for self-hosting - what do I need to worry about? 1 week ago:
Overall, it looks like you’ve done your homework, covering the major concerns. What I would add is that keeping an RPi cool is a consideration, since without even a tiny heatsink, the main chip gets awfully hot. Active cooling with a fan should be considered to prevent thermal throttling.
The same can apply to a laptop, since the intended use-case is with the lid open and the machine perched upon a flat and level surface. But laptops already have automatic thermal control, so the need for supplemental cooling is not very big.
Also, it looks like you’ve already considered an OS. But for other people’s reference, an old x86 laptop (hopefully newer than i686) has a huge realm of potential OSes, including all the major Linux distros and the *BSDs. Whereas I think only Ubuntu, Debian, and Raspberry Pi OS (formerly Raspbian) are the major OSes targeting the RPi.
One last thing in favor of choosing the laptop: using what you have on hand is good economics and reduces premature ewaste, as well as fomenting the can-do attitude that’s common to self hosting (see: !selfhosted@lemmy.world).
TL;DR: not insane
- Comment on How do you beat post-work floppiness? 1 week ago:
At the very minimum, gym in the morning (but after coffee/caffeine, plus the time for it to kick in) is the enlightened way. It helps if your gym is nearby or you have a !homegym@lemmy.world .
I personally also use the wee morning hours to reconcile my financial accounts, since ACH transactions in the USA will generally process a day faster if submitted before 10:30 ET.
- Comment on Are physical mail generally not under surveillance? If everyone suddently ditched electronic communications and start writing letters, would governments be able to practically surveil everyone? 1 week ago:
The photos taken by the sorting machines are of the outside of the envelope, and are necessary in order to perform OCR of the destination address and to verify postage. There is no general mechanism to photograph the contents of mailpieces, and given how enormous the operations of the postal service are, casting a wide surveillance net to capture the contents of mailpieces would be impractical, and unlikely to stay secret before someone eventually spilled the beans.
That said, what you describe is a method of investigation known as a mail cover, where the info from the outside of a recipient’s mail can be useful. For example, getting lots of mail in plain envelopes from a huge number of domestic addresses – the sort of mail that victims of remittance fraud would send – could be a sign that the recipient is laundering fraudulent money. Alternatively, sometimes the envelope used by the sender is so thin that the outside photo accidentally reveals the contents. This is no different than holding up an envelope to the sunlight and looking through it. Obvious data is obvious to observe.
In electronic surveillance (a la NSA), looking at just the outside of an envelope is akin to recording only the metadata of an encrypted messaging app. No, you can’t read the messages, but seeing that someone received a 20 MB message could indicate a video, whereas 2 KB might just be one message in a rapid convo.
- Comment on Why do all text LLMs, no matter how censored they are or what company made them, all have the same quirks and use the slop names and expressions? 2 weeks ago:
So no, no billion dollar company can make their own training data
This statement brought along with it the terrifying thought that there’s a dystopian alternative timeline where companies do make their own training data, by commissioning untold numbers of scientists, engineers, artists, researchers, and other specialties to undertake work that no one else has. But rather than trying to further the sum of human knowledge, or even directly commercializing the fruits of that research, that it’s all just fodder to throw into the LLM training set. A world where knowledge is not only gatekept like Elsevier but it isn’t even accessible by humans: only the LLM will get to read it and digest it for human consumption.
Written by humans, read by AI, spoonfed to humans. My god, what an awful world that would be.
- Comment on Why is it called "overseas" even if a dispora population move to a place connected by land? 2 weeks ago:
A few factors:
- Human population centers historically were built by natural waterways and/or by the sea, to enable access to trade, seafood, and obviously, water for drinking and agriculture
- When the fastest mode of land transport is a horse (ie no railways or automobiles), the long-distance roads between nations which existed up to the 1700s were generally unimproved and dangerous, both from the risk of breakdown and from highway robbery. Short-distance roads made for excellent invasion routes for an army, and so those tended to fall under control of the same nation.
- Water transport was (and still is) capable of moving large quantities of tonnage, and so was the predominant form of trade, only seeing competition when land transport improved and air transport was introduced.
So going back centuries, when all the “local” roads were still within the same country (due to conquest), and all the long-distance roads were treacherous, slow, and usually uncomfortable (ie dysentery on the Oregon Trail), the most obvious way to get to another country would have been to get a ride on a trading ship. An island nation would certainly regard all other countries as being “overseas”, but so would an insular nation hemmed in by mountains but sitting directly on the sea.
TL;DR: for most of human history, other countries were most reasonably reached by sea. Hence “overseas”.
- Comment on What is the catalyst that actually causes (financial) bubbles to burst? 2 weeks ago:
Truly, it could be anything that unsettles the market. A bubble popping is essentially a cascading failure: the dominoes fall, the house of cards collapses, and fear turns into panic, even when everyone is of sound mind.
The Great Depression is said to have started because of a colossally bad “short squeeze”, where investors tried to corner the market on copper futures, I think. That caused some investment firms to go broke, which then meant trust overall was shaken. And then things spiraled out of control thereafter, irrespective of whether the underlying industries were impacted or not.
So too with the Great Financial Crisis in 2008, where the USA housing market collapsed, and the extra leverage that homeowners had taken against their home value worked against them, plunging both individuals and mortgage companies into financial ruin. In that situation, the fact that some people lost their homes, coupled with losing their jobs in a receding market, was an unvirtuous cycle that fed itself.
I can’t speculate as to what will pop the current bubble, but more likely than not, it will be every bit as messy as the bubbles of yore. But much like the Big One – which in California refers to another devastating earthquake to come – it’s not a question of if but when.
Until it (and the AI bubble popping) happens though, it is not within my power to do much about it, and so I’ll spend my time preparing. That doesn’t mean I’m off to move my retirement funds into S&P 500 ex-AI though, since even the Dot Com bubble produced gains before it went belly up. I must reiterate that no one knows when the bubble will pop, so getting in or getting out now is a financial risk.
It’s a rollercoaster and we’re all strapped in, whether we like it or not.
- Comment on How many virtual machines can you nest? 2 weeks ago:
All I can offer you is USA interstate commerce, notable rail vs road innovations in the 18th Century, North American electricity supplies, and bicycle wheel construction.
- Comment on How many virtual machines can you nest? 2 weeks ago:
I’ll take a stab at the question, but we will need to dive into a small amount of computer engineering to explain. To start, I am going to assume an x86_64 platform, because while ARM64 and other platforms do support hardware-virtualization, x86_64 is the most popular and most relevant since its introduction at the beginning of the 2000s. My next assumption is that we are using non-ancient hardware, for reasons that will become clear.
As a base concept, a virtual machine means that we have a guest OS that runs subordinate to a host OS on the same piece of hardware. The host OS essentially treats the guest OS as though it were just another userspace process, and gives the guest some time on the CPU, however the host sees fit. The guest OS, meanwhile, is itself a full-blown OS that manages its own userspace processes, divvying out whatever CPU time and memory it can get from the host OS, and this is essentially identical to how it would behave if the guest OS were running directly on hardware.
The most rudimentary form of virtual machine isolation was achieved back in the 1960s, with software-based virtual machines. This meant that the host emulated every single instruction that the guest OS would issue, recreating every side-effect and memory access that the guest wanted. In this way, the guest OS could run without change, and could even have been written for an entirely different CPU architecture. The IBM System/360 family of mainframes could do this, as a way of assuring business customers that their old software could still run on new hardware.
The drawbacks are that the performance is generally less-than-stellar, but in an era that valued program correctness, this worked rather well. The idea would also carry into higher-level languages, most notably the Java Virtual Machine (JVM). The Java language generally compiles down to bytecode suitable to run on the JVM (which doesn’t exist as physical hardware), and then real machines essentially run a JVM emulator to actually run the program. In this way, Java is a high-level language that can run anywhere, if provided a JVM implementation.
An advancement from software virtualization is hardware-assisted virtualization, where some amount of the emulation task is offloaded to the machine itself. This is most relevant when virtualizing the same CPU architecture, such as an x86_64 guest on an x86_64 host. The idea is that lots of instructions have no side-effects that affect the host, and so can be run natively on the CPU, then return control back to the host when reaching an instruction that has side-effects. For example, the basic arithmetic operation of adding two registers imposes no risks to the stability of the machine.
To do hardware-assisted virtualization requires that the hardware can intercept (or “trap”) such instructions as they appear, since the nature of branches and conditionals means that we can’t detect in advance whether the guest OS will issue those instructions or not. The CPU will merrily execute all the “safe” instructions within the scope of the guest, but the moment that it sees an “unsafe” instruction, it must stop and kick control back to the host OS, which can then deal with that instruction in the original, emulated fashion.
The benefit is that the guest OS remains unmodified (yay for program correctness!) while getting a substantial speed boost compared to emulation. The drawback is that we need the hardware to help us. Fortunately, Intel and AMD rose to the challenge once x86-on-x86 software virtualization started to show its worth in the early 2000s, when VMware et al demonstrated that the concept was feasible on x86. Intel VT-x and AMD-V are the hardware helpers, introducing a new set of instructions that the host can issue, which will cause the CPU to start executing guest OS instructions until trapping and returning control back to the host.
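The trap-and-emulate loop can be sketched as a toy interpreter. This is purely illustrative – real VT-x/AMD-V hardware does this in silicon via VM-exit events, and the instruction names below are stand-ins:

```python
# Toy trap-and-emulate: "safe" guest instructions run straight through,
# while privileged ones trap back to the host for emulation.

SAFE = {"add", "mov", "cmp"}           # no host-visible side effects
PRIVILEGED = {"out", "hlt", "wrcr3"}   # must be emulated by the host

host_trap_log = []

def host_emulate(insn):
    host_trap_log.append(insn)         # the host handles the side effect here

def run_guest(instructions):
    for insn in instructions:
        if insn in SAFE:
            pass                       # executes natively at full speed
        elif insn in PRIVILEGED:
            host_emulate(insn)         # "VM exit": control returns to host
        else:
            raise ValueError(f"unknown instruction: {insn}")

run_guest(["add", "mov", "out", "cmp", "hlt"])
print(host_trap_log)                   # ['out', 'hlt']
```

The performance win comes from the `pass` branch: the bulk of guest instructions never pay the trap cost.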
I will pause to note why same-on-same CPU architecture virtualization is even desirable, since compared to the emulation-oriented history, this might not seem immediately useful. Essentially, software-based virtualization achieved two goals, the latter of which would become extremely relevant only decades later: 1) allow running a nested “machine”, and 2) isolate the nested machine from the parent machine. When emulation was a given, then isolation was practically assured. But for same-on-same virtualization, the benefit of isolation is all that remains. And that proved commercially viable when silicon hit a roadblock at ~4 GHz, and we were unable to make practical single-core CPUs go any faster.
That meant that growing compute would come in the form of multiple cores per CPU chip, and this overlapped with a problem in the server market where having a separate server for your web server, and database server, and proxy server, all of these cost money. But seeing as new CPUs have multiple cores, it would save a bunch of money to consolidate these disparate servers into the same physical machine, so long as they could be assured that they were still logically running independently. That is to say, if only they were isolated.
Lo and behold, Intel VT-x and AMD-V were introduced just as core counts were scaling up. And this worked decently well, since hardware-assisted virtualization was a good order of magnitude faster than trying to emulate x86, which we could have done, but it was just too slow to commercialize.
Some problems quickly emerged, due to the limitations of the hardware assistance. The first has to do with how the guest OS expects to operate, and the second to do with how memory in general is accessed in a performant manner. The fix for these problems involves more hardware assistance features, but also relaxing the requirement that the guest OS remain unchanged. When the guest OS is modified to be better virtualized, this is known as paravirtualization.
All modern multi-tasking OSes with non-trivial amounts of memory (which would include all guest OSes that we care about) do not organize their accessible memory as though it were a “flat” plane of memory. Rather, memory is typically “paged” – meaning that it’s divvied out in pre-ordained chunks, such as 4096 Bytes – and frequently also makes use of “virtual memory”. Unfortunately, this is a clash in nomenclature, since “virtual memory” long predates virtualization. But understand that “virtual memory” means that userspace programs won’t see physical addresses for their pointers, but rather a fictional address which is cleverly mapped back to a physical address.
When combining virtual memory with pages, the OS is able to give userspace programs the appearance of near-unlimited, contiguous memory, even though the physical memory behind those virtual addresses are scattered all over the place. This is a defining feature of an OS: to organize and present memory sensibly.
The problem for virtualization is that if the host OS is already doing virtual+paged memory management, then it forces the guest OS to live within the host’s virtual+paged environment, all while the guest OS also wants to do its own virtual+paged memory management to service its own processes. While the host OS can rely upon the physical MMU to efficiently implement virtual+paged memory management, the guest OS cannot. And so the guest OS is always slowed down by the host having to emulate this job.
The second issue relates to caching, and how a CPU can accelerate memory accesses by fetching larger chunks of memory than what the program is currently accessing, in anticipation. This works remarkably well, but only if the program has some sense of locality. That is, if the program isn’t reading randomly from memory. But from the hardware’s perspective, it sees both the host OS and guest OS and all their processes, whose combined access patterns start to look random when they’re all running in tandem, and that deeply impacts caching performance.
The hardware solution is to introduce an MMU that is amenable to virtualization, one which can manage both the host OS’s paged+virtual memory as well as any guest OS’s paged+virtual memory. Generally, this is known as Second Level Address Translation (SLAT) and is implemented as AMD’s Rapid Virtualization Indexing or Intel’s Extended Page Tables. This feature allows the MMU to consider page tables – the basic unit of any MMU – that nest below a superior page table. In this way, the host OS can delegate to the guest a range of pages, and the guest OS can manage those pages, all while the MMU gives the guest OS some acceleration because this is all done in hardware.
This also helps with the caching situation, since if the MMU is aware that the memory is in a nested page table (ie guest OS memory), then that likely also means the existing cache for the host is irrelevant, and vice-versa. An optimization would be to split the cache space, so that it remains relevant only to the host or to the guest, without mixing up the two.
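The two-level lookup that SLAT performs can be sketched with plain dictionaries standing in for page tables. This is heavily simplified – real page tables are multi-level radix trees, and the page numbers here are arbitrary:

```python
# Simplified model of Second Level Address Translation (SLAT):
# guest-virtual -> guest-"physical" via the guest's page table, then
# guest-"physical" -> host-physical via the host's nested page table.

PAGE_SIZE = 4096

guest_page_table = {0: 5}   # guest virtual page 0 -> guest physical page 5
host_page_table = {5: 9}    # guest physical page 5 -> host physical page 9

def translate(guest_vaddr):
    page, offset = divmod(guest_vaddr, PAGE_SIZE)
    guest_phys_page = guest_page_table[page]           # first-level walk
    host_phys_page = host_page_table[guest_phys_page]  # second-level walk
    return host_phys_page * PAGE_SIZE + offset

print(hex(translate(0x123)))   # 0x9123: host page 9, offset preserved
```

The point of SLAT is that the MMU performs both walks itself, so the host no longer has to emulate the guest’s memory management in software.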
With all that said, we can now answer your question about what would happen. With hardware extensions like VT-x and SLAT, I would expect that cascading VMs would consume CPU and memory resources almost linearly, due to each guest OS adding its own overhead and running its own kernel. At some point, the memory performance would slow to a crawl, since there’s a limit on how much the physical cache can be split. But the CPU performance would likely be just fine, such as if you ran a calculation for digits of Pi on the 50th inner VM. Such calculations tend to use CPU registers rather than memory from DDR, and so could run natively on the CPU without trapping out to any of the enclosing hosts.
But I like the other commenter’s idea of just trying it and see what happens.
- Comment on Why don't cars have a way to contact nearby cars like fictional spaceships do? 3 weeks ago:
What if the ability to communicate freely with other drivers made the experience closer to walking in a crowd?
In a dense crowd, the information being exchanged amongst the crowd is enormous. It is a constant negotiation, of different parties trying to get somewhere but also trying (hopefully) to respect other people’s space. And the stakes are lower, because bumping into someone is fine at 1 kph but totally unacceptable at 50 kph. And humans are dynamically adjustable, like raising one’s arms so that a stroller can pass more easily. Cars can’t really do that (except Transformers: Robots In Disguise).
In a crowded bazaar, there is enormous bandwidth in reading people’s facial cues, in seeing whether they’re distracted by goods on display or by their Instagram posts, plus what people actually say – and what they don’t say – and how quickly or slowly they walk. All of that is context that is necessary to participate in the activity of passing through the crowd, and I don’t think it’s really feasible to build cost-optimized technology that exchanges the same amount of info while also needing to react 50x faster and deterministically, with safety standards suitable for 2-tonne machines that already kill and maim thousands per year.
- Comment on Why don't cars have a way to contact nearby cars like fictional spaceships do? 3 weeks ago:
Exactly this. I’ve long had a thought that if all automobiles were like the Invisible Boatmobile from SpongeBob, then most of the subtle cues between humans would make it easier to understand intentions, with a corresponding reduction in misapprehension and collisions.
That said, humans simply are poorly adapted to traveling at 100 kph, so who’s to say if these cues are even understandable at high speed. And of course, it’s downright impossible to see those details when blinded by mutual headlights on a rural highway at night.
- Comment on Why don't cars have a way to contact nearby cars like fictional spaceships do? 3 weeks ago:
As a thought experiment, I’m prepared to momentarily set aside the practical and societal issues to see whether a mechanism for motorists to communicate to any other nearby motorists would have a use.
To set some ground rules, I think it’s fair to assert that such a communication mechanism is not meant for lollygagging, but would be used for some sort of operational reason that is related to driving a motor vehicle. So the use-cases would be broader than just safety or traffic management, and could include coordination between drivers all heading to the same place. This criterion means we won’t require the generality of a mobile phone network (which can call anyone); instead, the mechanism is very local.
Some examples that might use this mechanism:
- Broadcasting a safety hazard to motorists further behind, such as objects in the road or right after a sharp curve
- Telling a specific car that their trailer has lost a strap, that it is flailing in the wind, and it might get caught under the rear wheels
- Informing all cars in the camping group platoon that you’ll be stopping at Micky-D’s for a bathroom break, and they should keep going
- For two cars that already drove over some sharp road debris, they can look at each other’s cars to relay any observable damage, to decide whether to stop on the shoulderless highway or keep driving to an exit
This selection of examples represents exigent circumstances that arise while driving, rather than something which could have been planned/coordinated in advance. Moreover, they cover scenarios that are one-to-many or one-to-one, as well as unilateral messages or bilateral conversations.
We also need to consider what cues already exist between motorists, some of which are quite dated:
- Honking (so that someone else will do something that fixes the situation)
- Waving through (to indicate that you are yielding and they can proceed)
- Turning an invisible crank (asking them to roll down their window, despite manual windows being very uncommon now in the USA)
- High-beam flashing (to request they change lanes so that you can pass them; or at an intersection, that you’re yielding and they can proceed)
- Stopping and opening the hood (the time-tested signal that your car has malfunctioned and you need assistance)
- Turning on hazard lights (you have unexpectedly stopped somewhere and cannot move; or you are traveling very slowly; or otherwise, some unspecified hazard exists and you need space to manoeuvre and everyone should be on-alert)
- Left/right indicators (you are going to turn or change lanes; if a parking space, you are claiming that parking space)
Before we even check whether these existing cues can serve the examples above, we can see there are already a fair number of them. The problem with cues, though, is that they might not be universally understood (eg a motorist from flat Nebraska might not understand the hazard lights on a slow-going truck climbing up Tejon Pass heading in/out of Los Angeles). Worse, some cues are downright dangerous in certain circumstances, such as waving a motorist into an intersection when neither of you can see the oncoming fire truck that strikes them.
Notice that for all these cues, only fairly simple messages can be conveyed, and for anything more complicated, it is necessary to “turn the invisible crank”, meaning that you and them need to roll down your windows and talk directly about what the complex situation is. So if a situation is simple, then it’s likely one of the existing cues will work. But if not, then maybe our new car-to-car system might turn out to be useful. Let’s find out.
Scenario 1 is partially addressed by one very long honk or using hazard lights, depending on if the hazard is avoidable or if the hazard requires all traffic to halt. If it is about a small object in the road, then perhaps no message is needed at all, since we assume all motorists are paying attention to the road. If the hazard is a hidden one – such as behind a curve, or black ice – then only hazard lights would help, but it might not be clear to following motorists what the issue is. They would only know to remain alert.
A broadcast system could be effective, but only to a point: motorists cannot spend more than a sentence or maybe even a few words to understand some situation that may only be seconds away. We know this from how roadway signs are written: terse and unambiguous. So if a broadcast system did exist for hazards, then it must be something which can be described in fewer than maybe 5 words. This means the system isn’t useful for info about which parking lots at LAX have room, for example.
Scenario 2 involves a hazard that is moving, and can be addressed by honking and high-beams to get the motorist’s attention. There is no ability to convey the precise nature of the hazard, but outside of nighttime environments where people may be hesitant to stop just because someone is trying to tell them something on a rural Interstate, this generally is enough to prevent a roadway calamity.
But supposing we did want to use our new system to send that motorist a message, the same concern from earlier must be respected: it is improper to flood a motorist with too much info when the driving task doesn’t really allow for much time to do anything else. An apt comparison would be to air transport pilots, where a jetliner at cruising altitude actually does have a lot of spare time, but not when preparing for takeoff or landing. Driving an automobile is a continual task, and for the time when a car is stopped at a traffic light, then there is virtually no need for a car-to-car communication system; just yell. The need for ACARS for automobiles [pun intended] is looking less useful, so far.
Scenario 3 is similar to Scenario 2, but is a one-to-many message. But given how such exchanges tend to also become multilateral (“can you get me a Big Mac as well?” and “well, we don’t have to be at the camp site until 4:20”), this once again starts to become a distraction from the driving task.
Scenario 4 is probably the most unique, because it rarely happens: motorists always have the option of stopping, although stopping can itself create a hazard if the location is not great (eg left lane on an American freeway). It would be truly unusual for two cars to have struck something AND then need to quickly decide if they can press on toward the nearest exit (eg minor body damage) or if they must stop immediately (eg a fuel rupture that starts a small fire beneath the vehicle) AND there is someone else who can mutually exchange info about the damage.
It’s such a contrived scenario, because I actually made it up, based on the similar situation that occurs for aircraft that suffer damage while in the air. In such situations, the pilot would need external support, which can come from a nearby aircraft, or ATC, or an escort fighter jet. For example, if an aircraft cannot confirm safe extension of the landing gear, diagnosing the problem is helped by a nearby news helicopter confirming that the landing gear is clearly visible and locked.
Alternatively, if a departing aircraft has struck a piece of metal dropped by an earlier Continental Airlines DC-10, and that bit of metal causes the left tire to explode, further causing a fuel rupture from the left tank and an uncontrollable fire slowly destroying the wing, it would be very useful if ATC can tell the pilots ASAP before the aircraft is going too fast to abort the takeoff, resulting in an inability to fly and an eventual crash into a hotel.
I bring up my contrived automobile Scenario 4 because it shows how things could always be slightly different if a small factor was simply changed, if maybe there were better warnings to the pilots from their aircraft, or if the Continental plane was better maintained, or if Charles de Gaulle ATC was just a little bit faster to radio to the pilots. So it’s perfectly natural to think that by having this one aspect of the driving experience changed, maybe there’s a lot of value we could get from it. Indeed, the Swiss Cheese Model of accident causation tells us that any one layer could have been different and thus stop the holes from lining up.
But from this thought experiment, we can see that the existing cues between motorists already serve the most common reasons for needing to communicate while on the road. And any message more complicated than “I would like to pass” becomes a distraction, and thus less useful and more dangerous in practice. Aviation knows full well the dangers of introducing a fix which ends up causing more problems in the long run.
- Comment on What taxes are on a can of NOS in California? 3 weeks ago:
Overstating the tax and then being charged less, that’s going to be less jarring to consumers than the opposite situation, where people are charged more than what it says on the can haha
But I get it: taxes are hard, since even the supposed simplicity of a “flat tax” rate still requires exemptions and exceptions everywhere. Otherwise, people will get away with paying less tax than they ought, pay more tax than is reasonably fair, or the purpose of the tax will be wholly defeated.
Taxation systems: simple to administer, easy to understand, fair. Pick at most two. Anyone who says they’ve come up with a system that achieves all three perfectly is a liar or a conman.
- Comment on What taxes are on a can of NOS in California? 3 weeks ago:
Some background: retailers in California that sell taxable goods have the choice of either including sales tax in their posted prices (“tax incl”) or not (“+tax”). Whichever they choose, they must disclose which method they use and must be consistent. Retailers of tax-free goods obviously don’t need to make this choice. Vending machines invariably include the sales tax (and CRV if packaged beverage) to make the price a round number.
non-prepared food items no longer have sales tax
This is mostly correct, since the sales tax on food is generally on hot food that is otherwise unprepared. If any other preparation occurs (like with a sandwich from a sandwich shop), then that added preparation makes the whole sandwich taxable, even though the individual food ingredients would have been tax-exempt.
The only taxes I know I would be paying for a canned soda would be the CRV/California Redemption Tax, which is a flat $0.10 per aluminum can.
Minor quibble: CRV is California Redemption Value and is a refundable fee. The tiny distinction from a tax is that fees are potentially refundable (in this case, upon recycling the container) whereas a tax is virtually never refunded in any scenario. That said, the California Constitution specifies that taxes and fees have the same requirements when it comes to approving them, since the payment of a tax or fee is mandatory. But I digress.
Anyway, the rates that you described are not correct. The current rates are:
- 5 cents for containers less than 24 ounces
- 10 cents for containers 24 ounces or larger
And only applies to aluminum, glass, plastic, and bi-metal.
So a can of NOS (16 fl oz) pays just 5 cents, but a 24 fl oz can pays 10 cents. I’m not sure which size can you were looking at, but I’m going to guess it was a 16 fl oz can.
If the 16 fl oz can is priced at $1.98, then adding 5 cents CRV makes it $2.03. The California Department of Tax and Fee Administration (CDTFA) writes that:
Sales of noncarbonated drinks are generally not taxable, but their containers may be subject to the CRV. On the other hand, sales of carbonated and alcoholic beverages are generally taxable, and the CRV fee that is charged for their containers is taxable
And:
Under SNAP, we consider items purchased with CalFresh benefits as sales to the United States government, and those items are therefore exempt from tax in California.
And:
Sales of eligible food items purchased with CalFresh benefits are exempt from tax, even if the sale of the food item is normally taxable. For example, the sales of carbonated beverages, ice, and food coloring are exempt from tax when purchased with CalFresh benefit
So if you’re somewhere within, say, San Francisco where the sales tax rate is 8.625%, then that brings the $2.03 up to $2.21 if bought without CalFresh, since both sales tax and CRV apply, and CRV is itself taxable. But on CalFresh, neither sales tax nor CRV apply, which would just be $1.98.
This all appears to align with your observations.
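To sanity-check the arithmetic above, here’s a minimal Python sketch. The function names and structure are mine (nothing official); the rates are the ones from this comment: 5¢/10¢ CRV with a 24 fl oz threshold, an 8.625% San Francisco sales tax, and sales tax applying to the price-plus-CRV subtotal since CRV is itself taxable.

```python
def crv_cents(fluid_ounces: float) -> int:
    """CRV deposit: 5 cents under 24 fl oz, 10 cents at 24 fl oz or larger."""
    return 5 if fluid_ounces < 24 else 10

def checkout_total(base_price: float, fluid_ounces: float,
                   tax_rate: float, calfresh: bool = False) -> float:
    """Total at the register for a carbonated beverage.

    On CalFresh, neither sales tax nor CRV apply, so it's just the
    posted price. Otherwise the CRV is added first, and the sales tax
    applies to the (price + CRV) subtotal, since CRV is taxable.
    """
    if calfresh:
        return round(base_price, 2)
    subtotal = base_price + crv_cents(fluid_ounces) / 100
    return round(subtotal * (1 + tax_rate), 2)

# 16 fl oz can of NOS at $1.98, bought in San Francisco (8.625%):
print(checkout_total(1.98, 16, 0.08625))                  # 2.21
print(checkout_total(1.98, 16, 0.08625, calfresh=True))   # 1.98
```

Which reproduces the $2.21 and $1.98 figures above: $1.98 + $0.05 CRV = $2.03, and $2.03 × 1.08625 ≈ $2.21.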
- Comment on Did it really used to be common for guys to go to a bar every night like in Cheers or The Simpsons? 4 weeks ago:
I’d say the qualities of the average American park leave much to be desired, when compared to NYC Central Park, San Diego’s Balboa Park, or SF’s Presidio.
In suburban areas, the municipal park tends to be a monoculture of grass plus maybe a playground, a parking lot, and if lucky, a usable bathroom. Regional parks are often nicer, with amenities like pickleball courts or a BMX park, though asking for benches (not rocks or concrete verges, but actually bench seats) and shade might be a stretch.
My point is that the USA has fewer parks and public squares than it ought to. I don’t mean just a place to go jogging or to push a stroller along, but a proper third space where people actively spend time and create value. Where street vendors congregate because that’s also where people congregate. A place that people – voluntarily, not by necessity – would like to be. A destination in its own right, where even tourists will drop by and take in the air, the sights, and the social interactions.
Meanwhile, some parts of the USA actively sabotage their parks, replacing normal park furniture with versions that are actively hostile to homeless people, while alienating anyone that just wants an armrest as they sit down. Other municipalities spend their Parks & Rec funds on the bare minimum of parks, lots that are impractically tiny. Why? Because a public park can be used to exclude registered sex offenders from a neighborhood, leading to the ludicrous situation where whole cities are an exclusion zone. Regardless of one’s position on how to punish sex offenses, the denial of housing and basic existence is, at best, counterproductive.
So I reiterate: the USA might have a good quantity of parks, but not exactly good quality of parks. People will socialize online unless they are given actual options to socialize elsewhere. And IRL options would build value locally, whereas online communities only accrue to the benefit of the platforms (eg Facebook, WhatsApp) they run on.
- Comment on Did it really used to be common for guys to go to a bar every night like in Cheers or The Simpsons? 4 weeks ago:
Even with NA (low/non-alcoholic) beverages, it’d be nice to have third places that don’t come with an obligation to spend money.
To be clear, I’m not asking for places that ban spending money, but there are third places like parks (eg NYC Central Park) that are destinations in their own right, but one can also spend money there, such as buying stuff and having a picnic on the grass, or bringing board games and meeting up with friends. Or strolling the grounds astride rental e-bikes. Or free yoga.
Where there’s an open space, people make use of it. But we don’t really have much of that in the USA, that isn’t tied up as a parking lot, an open-space preserve (where people shouldn’t tread upon to protect wildlife), or are beyond reasonable distances (eg BLM land in the middle of Nevada).
- Comment on Is Louis Rossmann a fascist like futo? 4 weeks ago:
At the very minimum, give the readers a chance to understand the context of the question: hachyderm.io/@dalias/115418530488528338
- Comment on What are some good things to purchase to add a new distraction to my life? 4 weeks ago:
It very much depends, I think. Ham radio was really helpful to me during 2020 because it was a social activity that was compatible with distancing requirements, and is a great way to talk with people afar. As in, other continents but also local folks as well.
Fishing, watercraft, and woodworking all have different prerequisites, like a nearby body of water or the space for equipment. They also require some logistical planning, like fishing licenses, how to identify and prep fish, and where to source wood. These things are often easier to learn if you know someone who already partakes in the activity.
But for civil advocacy, that one has no tangible result that you can put in the living room, earns no awards or points, and puts you directly in the public spotlight, ugly as it may be. And yet, despite all that, it has the potential to impact the greatest number of people in the most accessible way. Paraphrasing a Greek proverb: to commit to this endeavor knowing full well that it will never yield you a personal benefit, that is the mark of a great and virtuous citizen.
All the activities I’ve listed are activities that hone personal development, and can be passed on to another generation, just in case you wanted even more engrossment.
- Comment on What are some good things to purchase to add a new distraction to my life? 4 weeks ago:
!homelab@lemmy.ml can easily become very involved.
But for other activities, fishing, watercraft (motorized or not), woodworking, ham radio, and civic advocacy (ie public transport, housing, anti-corruption). All of these can easily be a lifetime’s worth.
- Comment on How much more progressive are European views as compared to progressives in America? 4 weeks ago:
One thing which isn’t immediately apparent, even to Americans themselves, is that the large American political parties are less equivalent to individual political parties elsewhere, and are closer to “uneasy coalitions”, like those found in Europe involving multiple parties trying (and maybe failing) to form a government. That makes it harder to draw broad conclusions like “USA Democrats would be right-of-center” because progressives and “DINOs” (Democrats in name only) within the party would be left-wing or right-wing, respectively. Logically, the same applies to the Republican party, although ranging from RINOs (Republicans in name only) and “moderate Republicans”, to the far-right factions of the party, like neo-Nazis and MAGA.
With that said, what you’re describing sounds similar to social democracy. Not to be confused with democratic socialism, which is generally further along to the left than social democracy, with the goal to reform the state away from private ownership of the means of production and away from capitalism. When Bernie Sanders of Vermont says “I am a socialist”, his positions align well to European social democracy, even though he originally described himself as “democratic socialist”.
But I must reiterate that the precise definition of political ideology is less important than community-building, since that’s how ideology becomes reality.
- Comment on Does free healthcare access increase or decrease the need for medical personnel overall? 1 month ago:
When doing comparisons of the nature posed by the title, it is all-important to establish the baseline criteria. That is, what does the landscape look like just prior to implementing the titular policy?
If starting from the position of the present-day USA, then it is almost certain that free-at-time-of-service universal health care would cause the Bureau of Labor Statistics (BLS) to rewrite their projections for medical personnel jobs, in very much an upward trajectory. After all, middle- and upper-class people that already had decent coverage won’t somehow need more healthcare just because it’s free, but people who have never seen a doctor in their adult life would suddenly have access to a physician. More total patients means more medical staff needed, both short-term and long-term. The latter is because the barrier to annual checkups is all but eliminated, which should also yield better outcomes through early detection of problems and development of working rapports with one’s physician.
If, however, the baseline situation is a functional but private-payer healthcare system in a place with a low Gini coefficient – meaning income is not concentrated in a few people – then it’s more likely that healthcare is accessible to most people. Thus, the jump in patients caused by free healthcare may be minimal or even non-existent. It may, however, be that different segments of this population would benefit by access to a higher standard of quality care under a universal healthcare system, if removing private-payer results in dismantling of legacies caused by racism, colonialism, or whatever else.
After all, that’s one of the tenets of a universal healthcare system: people get the treatment they need, with no regard for who they are or what wealth they have (or not).
- Comment on How does streaming compare to "analog"? 2 months ago:
I’m a bit short on time, but I think “streaming” needs to be broken down into categories of scale. Streaming video from your home Plex server (shout-out to !homelab@lemmy.ml) is a lot different than Netflix’s video delivery system.
The latter intentionally stores the same content in multiple geographies, then with caches at local data centers, and sometimes even caches within your ISP’s network. All of this to distribute the load of millions of users, who can just as easily be in Florida as they might be in Oregon.
Whereas a home server has just one copy of the content, and since it might not always be streaming a video to you, it can save power by spinning down drives or other optimizations. It is simply not possible to characterize “streaming” as one thing when such radically different delivery mechanisms can all plausibly be considered streaming.
- Comment on What are some franchises with characters that personify countries? 2 months ago:
Does a webcomic count as a franchise? satwcomic.com
- Comment on What is a federated alternative to Wikipedia? 2 months ago:
No, I want a decentralized go-to place that I can check many points of view over a subject, just like the Fediverse works today.
I disagree with the premise that multiple POVs on every topic will yield better understandings or discussion. It is the same flaw that Ground News or other services have, which purport to curate POVs from different news media outlets, with the implicit assumption that all the outlets have something useful to offer. This assumption is absolutely balderdash.
The Fediverse is no more – or less – immune from disinformation and other ails, but has better user- and instance-level protections: bans and defederation are effective, because if they weren’t, people here wouldn’t log back on. For Mastodon and Lemmy and other forms of social media, the decentralization has clear and obvious benefits.
A decentralized knowledge-store does not.
There is nothing to fear.
There is everything to fear when knowledge is spread out into small libraries across the land. The historical analog is book-burning incidents that dotted human history, whether to suppress paganism, Mayan culture, or the spread of communism. The modern-day analogy is when Vine went defunct and the content was almost wholly lost to the world. The Fediverse example is when an instance unexpectedly disappears, stranding all its users.
But focusing on a knowledge-store, technology has given us the ability to copy data at rates that outpace all of history’s ecclesiastical scribes put together. We can – and do – preserve the largest datasets (see !datahoarder@lemmy.ml) because it is a matter of resilience. Yet that endeavor has become more difficult precisely because of technology. The Internet Archive faces this issue, because they cannot save what they don’t even know exists.
The Fediverse inhabits a very special Goldilocks zone right now, not unlike Wikipedia, where the availability of interest, capabilities, and materiel allow for the existence of this internet experiment. But fragile it is, and an instance is never more than a DMCA notice, a UK age restriction law, a frivolous but expensive SLAPP suit, or just plain ol’ running out of money away from disappearing.
If I had spare time and energy and were presented with the option to either: 1) set up a decentralized knowledge store of nebulous benefit, or 2) support the online compendium which I’ve personally used for over two decades now and has helped untold numbers of students and researchers with starting the research into a new-to-them topic, and could do so by using my servers to seed the all-Wikipedia torrents… well, I think the choice is clear.
- Comment on What is a federated alternative to Wikipedia? 2 months ago:
As a website or service, sure. But the Wikipedia has been available to download for offline use since basically its inception. This is how users in places with poor internet connections can still benefit from the Wikipedia. Certainly, the idea of distributing Wikipedia on disc is a bit odd.
But whether it be smuggling books across the Iron Curtain, downloading swaths of paywalled scientific papers from an MIT computer, or accessing information about abortion, the pursuit of knowledge is a chiefly human trait and one not easily suppressed. But of all those, the Wikipedia has the best track record for being openly available and free (as in speech, and as in beer).
- Comment on What is a federated alternative to Wikipedia? 2 months ago:
I think we need to start with what Wikipedia is meant to be, before even considering whether it would be aided through federation. By their own words:
Wikipedia’s purpose is to benefit readers by acting as a widely accessible and free encyclopedia; a comprehensive written compendium that contains information on all branches of knowledge.
Encyclopedias are designed to introduce readers to a topic, not to be the final point of reference. Wikipedia, like other encyclopedias, is a tertiary source and provides overviews of a topic.
Content is governed by three principal core content policies – neutral point of view, verifiability, and no original research.
That describes the content intended to go into the Wikipedia, but we need to also mention the distinction between the Wikipedia itself, the MediaWiki software package which powers Wikipedia, and the Wikimedia Foundation.
With MediaWiki, which is FOSS (GPLv2), anyone can set up their own encyclopedia-style volume of articles to host on the web. And that’s exactly what many fandom websites or technical documentation websites do, because that level of detail would not be accepted into the general-knowledge Wikipedia. And you can hardly blame the Wikipedia for wanting to avoid scope-creep.
Likewise, if someone disagrees with how a topic is discussed in a Wikipedia article, they can go in and make the change, provided that they follow the same rules and procedures as everyone else. Yes there are moderators, but even moderators can be moderated. In a way, Wikipedia is a collective effort that somehow democratized editorship and it’s shocking that it hasn’t devolved into major turf wars.
And that’s where the Wikimedia Foundation comes in. They are both the charitable foundation that keeps the Wikipedia servers running and the administrator of the collection, much like how a museum protects cultural treasures. Dissatisfaction with the limited role that the Foundation plays can be solved by forking the Wikipedia; they don’t assert a monopoly on the collective knowledge, and indeed the entire thing can be downloaded for offline use or to host a mirror under separate administration.
With all that said, Wikipedia as a concept hews very closely to the print version of an encyclopedia. It is functionally a really big book, painstakingly edited by untold numbers of people. The fact that it’s not just a bunch of random blog posts is its strength. Wikipedia is not social media; it is distributed editorship.
But supposing you do want a distributed knowledge base, where there might exist multiple versions of an article, please explain why the World Wide Web doesn’t already accomplish that. If the WWW is too general-purpose for your liking, then perhaps something like the DICT protocol is more palatable?
Despite ostensibly dealing with dictionaries, DICT has been used to offer the CIA World Factbook and the Jargon File, which are more like subject-matter specific encyclopedias. As a standardized protocol – even curl can fetch DICT entries – the Fediverse doesn’t need another protocol to do the same thing.
- Comment on Is streetwear a joke? 2 months ago:
I think you’ll have to provide some examples – ideally as photos – of streetwear fashion. Without any prior research, I only know the term to mean “comfy clothes” that would fall below the typical bar for “casual” dress code
A quick web search shows examples ranging from perfectly reasonable outfits consisting of normally-proportioned shorts, jackets, pants, and shoes, to some outlandish outfits that prominently display designer brands.
And perhaps that’s the crux of the matter: what shows up on the fashion runway or “haute couture” magazines is never descriptive but prescriptive: a designer brand has a vested interest in getting the masses to believe that something is fashion so that they can move product.
Taken to the logical extreme, there is an idea that designer clothes is intentionally outlandish, precisely so that said clothes would never be worn by “normies” in day-to-day activities, and thus can always (and persistently) be projected as high-end.
Commercialized fashion is not a democratic experiment to see what most people want to wear. It is to move product every season. “Designer streetwear” is a poor approximation for what normal people wear when they just want to grab a sandwich from the bodega and then return to watch another episode from Season 2 of The Rehearsal. Maybe this should be called “real streetwear” to distinguish it from so-called designer goods.