litchralee
@litchralee@sh.itjust.works
- Comment on I'm looking for the best free online storage site my files. That is heavily encrypted and respect people's privacy, what would you suggest? 10 hours ago:
Steganography is one possible way to store a message “hidden in plain sight”, and video often makes for a seemingly innocuous medium to carry a steganographic payload, but in that endeavor, the point is to have two messages: a video that raises no suspicions whatsoever, and a hidden text document with instructions for the secret agent.
Encoding only the hidden message as a video would: 1) make it really obvious that there’s an encoded message, and 2) not be compatible with modern video compression, which would destroy the hidden message anyway if it were encoded directly as black and white pixels.
When video compression is being used, the available bandwidth to store steganographic messages is much lower, due to having to be “coarse” enough to survive the compression scheme. And video compression is designed around how human vision works, so shades of color are the least likely to be faithfully reproduced – most people wouldn’t notice if a light green is portrayed slightly darker than it ought to be. The good news is that with today’s high resolution video streams, the raw video bandwidth is huge and so having even just one-thousandth of that available for encoding hidden data is probably sufficient.
That said, hidden messages != encrypted messages: anyone who notices that there may be a hidden message can try to analyze the suspicious video and retrieve the payload. Encoding, say, English text in a video would still leave patterns, because some English letters (and thus ASCII-encoded bit patterns) will show up more frequently. But fortunately, one can encrypt data and then hide it using steganography. Encrypted data tends to approximate random noise, making it much harder to notice when hidden within the naturally-noisy video data. But bandwidth will be cut some more due to encryption.
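To make the encrypt-then-hide idea concrete, here is a minimal Python sketch that operates on a plain list of integers standing in for raw (uncompressed) pixel values; the XOR “encryption” and the function names are illustrative placeholders rather than a production scheme, and as noted above, real video compression would destroy naive least-significant-bit data anyway.

```python
# Minimal sketch: encrypt a message, then hide the ciphertext in the least
# significant bits of raw (uncompressed) pixel values. Illustrative only.
import secrets

def encrypt_xor(message: bytes, key: bytes) -> bytes:
    # One-time-pad style XOR, just to make the payload look like random noise.
    return bytes(m ^ k for m, k in zip(message, key))

def hide_lsb(pixels: list[int], payload: bytes) -> list[int]:
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    out = pixels.copy()
    for idx, bit in enumerate(bits):           # one payload bit per pixel
        out[idx] = (out[idx] & ~1) | bit       # overwrite the lowest bit
    return out

def recover_lsb(pixels: list[int], length: int) -> bytes:
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

message = b"meet at dawn"
key = secrets.token_bytes(len(message))        # shared secret, delivered separately
cover = [secrets.randbelow(256) for _ in range(256)]  # stand-in for pixel data

stego = hide_lsb(cover, encrypt_xor(message, key))
recovered = encrypt_xor(recover_lsb(stego, len(message)), key)
assert recovered == message
```

The point of the XOR step is simply that the hidden bits already look like statistical noise before they ever touch the cover data.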
TL;DR: it’s very real to hide messages in plain sight, in places people wouldn’t even think of looking closely at. Have you thought about the Roman Empire today?
- Comment on if all communication electronics died on New Year's day, how long would it take for other time zones to notice 1 day ago:
About as immediately as the power grid falls apart without constant frequency synchronization, so probably seconds. I do consider North America’s Western, Eastern, and Texas power grids to be a communication system, because they convey the precise 60 Hz AC line rate to every part of the continent.
- Comment on Where does the revenue gathered from taxes go and what is national debt? 3 days ago:
For the benefit of non-Google users, here is the unshortened URL for that Bank of England article: bankofengland.co.uk/…/money-creation-in-the-moder…
With that said, while this comment does correctly describe what the USA federal government does with tax revenues, it is mixing up the separate roles of the government (via the US Treasury) and the Federal Reserve.
The Federal Reserve is the central bank in the USA, and is equivalent to the Bank of England (despite the name, the BoE serves the entire UK). The Federal Reserve is often shortened to “the Fed” by finance people, which only adds to the confusion between the Fed and the federal government. The central bank is responsible for keeping the currency healthy, such as preventing runaway inflation and preventing banking destabilization.
Whereas the US Treasury is the equivalent to the UK’s HM Treasury, and is the government’s agent that can go to the Federal Reserve to get cash. The Treasury does this by giving the Federal Reserve some bonds, and in turn receives cash that can be spent for employee salaries, capital expenditures, or whatever else Congress has authorized. We have not created any new money yet; this is an equal exchange of bonds for dollars, no different than what you or I can do by going to treasurydirect.gov and buying USA bonds: we give them money, they give us a bond. Such government bonds are an obligation that the government must pay in the future.
The Federal Reserve is the entity that can create dollars out of thin air, because they control the interest rate of the dollar. But outside of a major financial crisis, they only permit the dollar to inflate around 2% per year. That’s 2% new money being created from nothing, and that money can be swapped with the Treasury, which is why the Federal Reserve ends up holding a large quantity of federal government bonds.
Drawing the distinction between the Federal Reserve and the government is important, because their goals can sometimes be at odds: in the late 1970s, the Iranian oil crisis caused horrific inflation, approaching 20%. Such unsustainable inflation threatened to spiral out of control, but also disincentivized investment and business opportunities: why start a new risky venture when a savings account would pay 15% interest? Knowing that this would be the fate of the economy if left unchecked, the Federal Reserve began to sell off huge quantities of its government bonds, thus pulling cash out of the economy. This curbed inflation, but also created a recession in 1982, because any new venture needs cash and the Fed had sucked it all up. Meanwhile, the Reagan administration would not have been pleased about this, because no government likes a recession. In the end, the recession subsided, as did inflation and unemployment levels, thus the economy escaped a doom spiral with only minor bruising.
To be abundantly clear, the Federal Reserve did indeed cause a recession. But the worse alternative was a recession that also came with a collapsed US dollar, unemployment that would run so deep that whole industries lose the workers needed to restart post-recession, and the wholesale emptying of the Federal Reserve and Treasury’s coffers. In that alternate scenario, we would have fired all our guns and have lost anyway.
- Comment on Why aren't tall people also wider? 1 week ago:
From a biology perspective, it may not be totally advantageous to grow in all three dimensions at once. Certainly, as life forms become larger, they also require more energy to sustain, and also become harder to cool (at least for the warm blooded ones). Generally speaking, keeping cool is a matter of surface area (aka skin). But doubling in each of the three dimensions would mean 4x more skin than before, yet 8x more mass/muscle. That’s now harder to keep cool.
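A tiny sketch of that square-cube arithmetic, just to show where the 4x and 8x come from:

```python
# Square-cube law: double every linear dimension and surface area (skin)
# grows by the square while volume (mass) grows by the cube.
scale = 2
skin_gain = scale ** 2   # 4x more skin available for cooling
mass_gain = scale ** 3   # 8x more mass generating heat
print(skin_gain, mass_gain, mass_gain / skin_gain)  # 4, 8, and 2x the heat per unit of skin
```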
So growing needs to be done with intention: growing taller nets some survival benefits, such as having longer legs to run. Whereas growing wider or deeper doesn’t do very much.
But idk mang, I’m in a food coma from holiday dinner, just shooting from the hip lol
- Comment on Why do personal knowledge base applications like Obsidian have all these bells and whistles for querying and parsing metadata/frontmatter but nothing similar for the actual content of notes? 1 week ago:
I recently learned about Obsidian from a friend, but haven’t started using it yet, so perhaps I can offer a perspective that differs from current users of Obsidian or any of the other apps you listed.
To start, I currently keep a hodge-podge of personal notes, some digitally and some in handwriting, covering different topics, using different formats, and there’s not really much that is common between any of these, except that I am the author. For example, I keep a financial diary, where I intermittently document the thinking behind certain medium/long-term financial decisions, which are retained only as PDFs. I also keep README.md files for each of the code repos that I have for electronics and Kubernetes-related projects. Some of my legacy notes are in plain-text .txt file format, where I free-form record what I’ve been working on, relevant links, and lists of items outstanding or that are TODOs. And then there is the handwritten TODO and receivables list that I keep on my fridge.
Amongst all of this chaos, what I would really like to have the most is the ability to organize each “entry” in each of their respective domains, and then cross-reference them. That is, I’m not looking to perform processing on this data, but I need to organize this data so that it is more easily referenced. For example, if I outline a plan to buy myself a new server to last 10 years, then that’s a financial diary entry, but it would also manifest itself with TODO list items like “search for cheap DDR5 DIMMs” (heaven help me) and “find 10 GbE NIC in pile”. It may also spawn an entry in my infrastructure-as-code repo for my network, because I track my home network router and switch configurations in Git and will need to add new addresses for this server. What I really need is to be able to refer to each of these separate documents, not unlike how DOIs uniquely identify research papers in academic journals.
It is precisely because my notes are near-totally unstructured and disparate that I want a powerful organization system to help sort it, even if it cannot process or ingest those notes. I look at Obsidian – based on what little I know of it – like a “super filing cabinet” – or maybe even a “card catalog” but that might be too old of a concept lol – or like a librarian. After all, one asks the librarian for help finding some sort of book or novel. One does not ask the librarian to rehash or summarize or extract quotes from those books; that’s on me.
- Comment on How long until we can start shorting years to 2 numbers again? 1 week ago:
In the English-speaking world, you can always shorten the year from 4 to 2 digits. But whether 1) this causes confusion, and 2) you or anyone cares if it does, are the points of contention. The first is context-dependent: if a customer service agent over the phone is trying to confirm your date of birth, there’s no real security issue if you only say the 2 digit year, because other info would have to match as well.
If instead you are presenting ID as proof of age to buy alcohol, there’s a massive difference between 2010 and 1910. An ID card and equivalent documentation must use a four digit year, when there is no other available indicator of the century.
For casual use, like signing your name and date on a holiday card, the ambiguity of the century is basically negligible, since a card like that is enjoyed at the time that it’s read, and isn’t typically stashed away as a 100-year old memento.
That said, I personally find that in spoken and written English, the inconvenience of the 4 digit year is outweighed by the benefit of properly communicating with non-American English users. This is because we Americans speak and write the date in a non-intuitive fashion, which is an avoidable point of confusion.
Typical Americans might write “7/1/25” and say “July first, twenty five”. British folks might read that as 7 January, or (incorrectly) 25 January 2007. But then for the special holiday of “7/4/25”, Americans optionally might say “fourth of July, twenty five”. This is slightly less confusing, but a plausible mishearing by the British would be “before July 25”, which is just wrong.
The confusion is minimized by a full 4 digit year, which leaves only the day/month ordering ambiguous. That is, “7/1/2025”.
Though I personally prefer RFC3339 dates, which are strictly YYYY-mm-dd, using 4 digit years, 2 digit months, and 2 digit days. This is always unambiguous, and I sign all paperwork like this, unless it explicitly wants a specific format for the date.
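For anyone writing dates in code rather than by hand, here is a small Python illustration of why I default to the RFC3339 form; the specific date is just an example:

```python
# The same date rendered a few ways; only the RFC 3339 / ISO 8601 form
# (YYYY-mm-dd) is unambiguous to readers from any country.
from datetime import date

d = date(2025, 7, 1)
print(d.strftime("%m/%d/%y"))   # 07/01/25   -- American habit, ambiguous
print(d.strftime("%d/%m/%Y"))   # 01/07/2025 -- common elsewhere, still ambiguous
print(d.isoformat())            # 2025-07-01 -- RFC 3339 date, unambiguous
```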
- Comment on If you had too, how would go about running a Instagram account? 1 week ago:
For the objective of posting photos to an Instagram account while preserving as much privacy as possible, your approach of a separate machine and uploading using its web browser should be sufficient. That said, Instagram for web could also be sandboxed using a private browsing tab on your existing desktop. Certainly, avoiding an installed app – such as the mobile app – will prevent the most intrusive forms of espionage/tracking.
That said, your titular question was about how to maintain an Instagram account, not just post images. And I would say that as a social media platform, this would include engagement with other accounts and with comments. For that objective, having a separate machine is more unwieldy. But even using a private browsing tab on your existing machine is still subject to the limits that Instagram intentionally imposes on their web version: they save all the crucial value-add features for the mobile app, where their privacy invasion is greatest.
To use Instagram without those restrictions means to play the game by their rules, which isn’t really compatible with privacy preservation. To that end, if you did want the full Instagram experience, I would suggest getting a separate, cheap mobile phone (aka a “YOLO phone”) to dedicate to this task. If IG doesn’t need a mobile number, then you won’t even need a working SIM. Then load your intended images using USB file transfer, and use an app like Imagepipe (available on F-Droid) to strip image metadata.
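If you would rather strip metadata on the desktop before the USB transfer, something along these lines also works; this is only a rough Pillow sketch (the file names are placeholders, and Imagepipe does the on-device equivalent):

```python
# Re-saving only the pixel data drops EXIF, GPS tags, and other metadata.
# Assumes the Pillow package is installed; file names are placeholders.
from PIL import Image

original = Image.open("vacation_photo.jpg")
clean = Image.new(original.mode, original.size)
clean.putdata(list(original.getdata()))     # copy pixels, nothing else
clean.save("vacation_photo_clean.jpg")
```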
- Comment on [deleted] 1 week ago:
For the blockchain technology at the very core of cryptocurrencies, it’s a reasonable concept that solves a specific challenge (ie no one can change this value unless they have the cryptographic key), and the notion of an indelible or tamper-evident ledger is useful in other fields (eg certificate revocation lists). Using a blockchain as a component is – like all of engineering – about picking the right tool for the job, so I wouldn’t say that having/not having a blockchain imparts any inherent quality of good or bad.
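As a toy illustration of what “tamper-evident ledger” means mechanically, here is a minimal hash chain in Python; it is only a sketch of the core idea, with none of the consensus, signatures, or networking that a real blockchain layers on top, and the entries are made up:

```python
# Each entry commits to the hash of the previous one, so editing any earlier
# entry invalidates every hash that follows.
import hashlib
import json

def add_entry(chain: list[dict], data: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"data": data, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain: list[dict]) -> bool:
    prev_hash = "0" * 64
    for entry in chain:
        body = {"data": entry["data"], "prev": entry["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

ledger: list[dict] = []
add_entry(ledger, "revoke certificate 42")
add_entry(ledger, "revoke certificate 99")
assert verify(ledger)

ledger[0]["data"] = "revoke certificate 41"   # tampering...
assert not verify(ledger)                     # ...is immediately evident
```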
One step above the base technology is the actual application as currency, meaning a representation of economic value, either to store that value (eg gold) or for active trade (eg the €2 coin). All systems of currency require: 1) recognition and general consensus as to their value, 2) fungibility (ie this $1 note is no different than your $1 note), and 3) the ability to functionally transfer the currency.
Against those criteria, cryptocurrencies have questionable value, as seen by how volatile the cryptocurrency-to-fiat currency markets are. Observe that the USD or Euro or RMB are used to pay people’s salaries, to denominate home mortgage loans, to buy and sell crude oil, and so on. Yet basically no one uses cryptocurrency for those tasks, no one writes or accepts business-to-business contracts denominated in cryptocurrency, and only a small handful of sovereign states accept cryptocurrency as valid payment. That’s… not a great outlook for circulating the currency.
But for fungibility, cryptocurrency clearly meets that test, and probably exceeds the fiat currencies: there’s no such thing as a “torn” Bitcoin note. There are no forgeries of Ethereum. It is demonstrable that a unit of cryptocurrency that came from blood-diamond profits is indistinguishable from a unit that was afforded by wages at a fuel station in Kentucky. There are no “marked notes” or “ink packs” when committing cryptocurrency theft, and it’s relatively easy to launder cryptocurrency through thousands of shell accounts/addresses. To launder physical money a thousand times is physically impossible, and is way too suspicious for digitized fiat currency transfers.
And that brings us to the ability to actually transfer cryptocurrency. While it’s true that it should only be an extra ledger entry to move funds from one address/account to another, each system has costs buried somewhere. Bitcoin users have to pay transaction fees, and tokens pegged to other currencies have to “execute” a “smart contract”, with attendant verification costs such as proof-of-work or proof-of-stake. These costs simply don’t exist when I hand a $20 note to a fuel station clerk. Or when my employer sends my wages via ACH electronic payment.
Observe how cryptocurrency is traded not at shops with goods (eg Walmart) or shops for currency (eg bureau de change at the airport) but mostly only through specialized ATMs or through online exchange websites. The few people who genuinely do use their cryptocurrency wallets to engage in transactions are now well in the minority, overshadowed by scammers, confidence/romance tricksters, investment funds with no idea of what they’re doing except to try riding the bandwagon, and individuals who have never traded financial instruments but were convinced by “their buddy’s friend” who said cryptocurrency was a money-making machine.
To that end, I would say that cryptocurrencies have brought out the worst of financial manipulators, and their allure is creating serious financial perils for everyday people, whether directly as a not-casino casino or to pay a ransomware extortion, or indirectly through the destabilization of the financial system. No one is immune to a breakdown of the financial system, as we all saw in 2008.
I used to like discussing the technical merits of ledger-based systems with people, but with the awful repercussions of what they’ve enabled, it’s a struggle to have a coherent conversation without someone suggesting a cryptocurrency use-case. And so I kinda have to throw the whole baby out with the bathwater. Maybe when things quiet down in a few decades, the technology can be revisited from a sober perspective.
- Comment on Does each country have a book/library of the laws of the land that a commoner can consult to check if they're about to do something illegal? 2 weeks ago:
Directly answering the question: no, not every country has such a consolidated library that enumerates all the laws of that country. And for reasons, I suspect no such library could ever exist in any real-life country.
I do like this question, and it warrants further discussion about laws (and rules, and norms), how they’re enacted and enforced, and how different jurisdictions apply the procedural machine that is their body of law.
To start, I will be writing from a California/USA perspective, with side-quests into general Anglo-American concepts. That said, the continental European system of civil law also provides good contrast for how similar yet different the “law” can be. Going further abroad will yield even more distinctions, but I only have so much space in a Lemmy comment.
The first question to examine is: what is the point of having laws? Some valid (and often overlapping) answers:
- Laws describe what is/isn’t acceptable to a society, reflecting its moral ideals
- Laws incentivize or punish certain activities, in pursuit of public policy
- Laws set the terms for how individuals interact with each other, whether in trade or in personal life
- Laws establish a procedural machine, so that by turning the crank, the same answer will output consistently
From these various intentions, we might be inclined to think that “the law” should be some sort of all-encompassing tome that necessarily specifies all aspects of human life, not unlike an ISO standard. But that is only one possible way to meet the goals of “the law”. If instead, we had a book of “principles” and those principles were the law, then applying those principles to scenarios would yield similar results. That said, exactly how a principle like “do no harm” is applied to “whether pineapple belongs on pizza” is not as clear-cut as one might want “the law” to be. Indeed, it is precisely the intersection of all these objectives for “the law” that makes it so complicated. And that’s even before we look at unwritten laws.
The next question would be: are all laws written down? In the 21st Century, in most jurisdictions, the grand majority of new laws are recorded as written statutes. But just because it’s written down doesn’t mean it’s very specific. This is the same issue from earlier with having “principles” as law: what exactly does the USA Constitution’s First Amendment mean by “respecting an establishment of religion”, to use an example. But by not micromanaging every single detail of daily life, a document that starts with principles and is then refined by statute law is going to be a lot more flexible over the centuries. For better/worse, the USA Constitution encodes mostly principles and some hard rules, but otherwise leaves a lot of details for Congress to fill in.
Flexibility is sometimes a benefit for a system of law, although it also opens the door for abuse. For example, I recall a case from the UK many years ago, where crown prosecutors in London had a tough time finding which laws could be used to prosecute a cyclist who injured a pedestrian. As it turned out, because of the way that vehicular laws were passed in the 20th Century, all the laws on “road injuries” basically required the use of an automobile, and so that meant there was a hole in the law when it came to charging bicyclists. They ended up charging the cyclist with the criminal offense of “furious driving”, which dates back to an 1860s statute that criminalized operating on the public road with “fury” (aka intense anger).
One could say that the law was abused, because such an old statute shouldn’t be used to apply to modern-day circumstances. That said, the bicycle was invented in the 1820s or 1830s. But one could also say that having a catch-all law is important to make sure the law doesn’t have any holes.
Returning to American law, it’s important to note that when there is non-specific law, it is up to the legislative body to fill those gaps. But for the same flexibility reasons, Congress or the state or tribal legislatures might want to confer some flexibility on how certain laws are applied. They can imbue “discretion” upon an agency (eg USA Department of Commerce) or to a court (eg Superior Court of California). At other times, they write the law so that “good judgement” must be exercised.
As those terms are used, discretion more-or-less means having a free choice, where either is acceptable but try to keep within reasonable guidelines. Whereas “good judgement” means the guidelines are enforced and there’s much less wiggle-room for arbitrariness. And confusingly, sometimes there’s both a component of discretion and judgment, which usually means Congress really didn’t know what else to write.
Some examples: a District Attorney anywhere in California has discretion when it comes to filing criminal charges. They could outright choose to not prosecute person A for bank robbery, but proceed with prosecuting person B for bank robbery, even though they were working together on the same robbery. As an elected official, the DA is supposed to weigh the prospects of actually obtaining a guilty verdict, as well as whether such prosecution would be beneficial to the public or a good use of the DA office’s limited time and budget. Is it a bad look when a DA prosecutes one person but not another? Yes. Are there any guardrails? Yes: a DA cannot abuse their discretion by considering disallowed factors, such as a person’s race or other immutable characteristics. But otherwise, the DA has broad discretion, and ultimately it’s the voters that hold the DA to account.
Another example: the USA Environmental Protection Agency’s Administrator is authorized by the federal Clean Air Act to grant a waiver of the supremacy of federal automobile emissions laws, to the state of California. That is to say, federal law on automobile emissions is normally the law of the land and no US State is allowed to write their own laws on automobile emissions. However, because of the smog crisis in the 70/80s, the feds considered that California was a special basket-case and thus needed their own specific laws that were more stringent than federal emissions laws. Thus, California would need to seek a waiver from the EPA to write these more stringent laws, because the blanket rule was “no state can write such laws”. The federal Clean Air Act explicitly says only California can have this waiver, and it must be renewed regularly by the EPA, and that California cannot dip below the federal standards. The final requirement is that the EPA Administrator shall issue the waiver if California requests it, and if they qualify for it.
This means the EPA Administrator does not have discretion, but rather is exercising good judgement: does California’s waiver application satisfy the requirements outlined in the Clean Air Act? If so, the Administrator must issue the waiver. There is no allowance of an “i don’t wanna” reason for non-issuance of the waiver. The Administrator could only refuse if they show that California is somehow trying to do an end-run around the EPA, such as by trying to reduce the standards.
The third question is: do laws encompass all aspects of everything? No, laws are only what is legally enforced. There are also rules/by-laws and norms. A rule or by-law is often enforced by an entity outside the legal system’s purview. For example, the penalty for violating a by-law of the homeowner’s association might be a revocation of access to the common spaces. For a DnD group, the ultimate penalty for violating a rule might be expulsion.
Meanwhile, there are norms, which are things that people generally agree on but are so commonplace that nobody formalizes them, even though breaking a norm would make everything else dysfunctional. For example, there’s a norm that one does not write an online comment in all caps, except to represent emphasis or yelling. One could violate that norm with no real repercussions, but everyone else would dislike you for it, they might not want to engage further with you, they might not give you any benefit of the doubt, they may make adverse inferences about you IRL, or other things.
TL;DR: there are unwritten principles that form part of the law, and there’s no way to record all the different non-law rules and social norms that might apply to any particular situation.
- Comment on How does the private equity bubble compare to the AI bubble if at all? 3 weeks ago:
Used for AI, I agree that a faraway, loud, energy-hungry data center comes with a huge host of negatives for the locals, to the point that I’m not sure why they keep getting building approval.
But my point is that in an eventual post-bubble-puncture world where AI has its market correction, there will be at least some salvage value in a building that already has power and data connections. A loud, energy-hungry data center can be tamed to be quiet and energy-sipping based on what hardware it’s filled with. Remove the GPUs and add some plain servers and that’s a run-of-the-mill data center, the likes of which have been neighbors to urbanites for decades.
I suppose I’d rehash my opinion as such: building new data centers can be wasteful, but I think changing out the workload can do a lot to reduce the impacts (aka harm reduction), making it less like reopening a landfill, and more like rededicating a warehouse. If the building is already standing, there’s no point in tearing it down without cause. Worst case, it becomes climate-controlled paper document storage, which is the least impactful use-case I can imagine.
- Comment on How does the private equity bubble compare to the AI bubble if at all? 3 weeks ago:
Racks/cabinets, fiber optic cables, PDUs, CAT6 (OOBM network), top-of-rack switches, aggregation switches, core switches, core routers, external multi-homed ISP/transit connectivity, megawatt three-phase power feeds from the electric utility, internal power distribution and step-down transformers, physical security and alarm systems, badge access, high-strength raised floor, plenum spaces for hot/cold aisles, massive chiller units.
- Comment on How does the private equity bubble compare to the AI bubble if at all? 3 weeks ago:
Absolutely, yes. I didn’t want to elongate my comment further, but one odd benefit of the Dot Com bubble collapsing was all of the dark fibre optic cable laid in the ground. Those would later be lit up, to provide additional bandwidth or private circuits, and some even became fibre to the home, since some municipalities ended up owning the fibre network.
In a strange twist, the company that produced a lot of this fibre optic cable and nearly went bankrupt during the bubble pop – Corning Glass – would later become instrumental in another boom, because their glass expertise meant they knew how to produce durable smartphone screens. They are the maker of Gorilla Glass.
- Comment on How does the private equity bubble compare to the AI bubble if at all? 3 weeks ago:
I’m not going to come running to the defense of private equity (PE) firms, but compared to so-called AI companies, the PE firms are at least building tangible things that have an ostensible alternative use. A physical data center building – even one located far away from the typical metropolitan areas that have better connectivity to the world’s fibre networks – will still be an asset with some utility, when/if the AI bubble pops.
In that scenario, the PE firm would certainly take a haircut on their investment, but they’d still get something because an already-built data center will sell for some non-zero price, with possible buyers being the conventional, non-AI companies that just happen to need some cheap rack space. Looking at the AI companies though, what assets do they have which carry some intrinsic value?
It is often said that during the California Gold Rush, the richest people were not those who staked out the best gold mines, but those who sold pickaxes to miners. So too would PE firms pivot to whatever comes next, selling their remaining interest from the prior hype cycle and moving to the next.
I’ve opined before that because no one knows when the bubble will burst, it is simultaneously financially dangerous to: 1) invest into that market segment, but also 2) to exit from that market segment. And so if a PE firm has already bet most of the farm, then they might just have to follow through with it and pray for the best.
- Comment on Do we have enough supra conductor to support quantum computing growth? 3 weeks ago:
I presume we’re talking about superconductors; I don’t know what a supra (?) conductor would be.
There are two questions here: 1) how much superconducting material is required for today’s state-of-the-art quantum computers, and 2) how quantum computers would be commercialized. The first deals in material science and whether more-capable superconductors can be developed at scale, ideally working at room temperature and thus not requiring liquid helium. Even a plentiful superconductor that requires merely liquid nitrogen would be a big improvement.
But the second question is probably the limiting factor, because although quantum computers are billed as the next iteration of computing, the fact of the matter is that “classical” computers will still be able to do most workloads faster than quantum computers, today and well into the future.
The reality is that quantum computers excel at only a specific subset of computational tasks, which classically might require mass parallelism. For example, brute-forcing encryption keys is one such task, but even applying Grover’s algorithm optimally, the speed-up is only a square-root factor. That is to say, if a cryptographic algorithm would need 2^128 operations to brute-force on a classical computer, then an optimal quantum computer would only need 2^64 quantum operations. If quantum computers achieve the equivalent performance of today’s classical computers, then 2^64 is achievable, so that cryptographic algorithm is broken.
If. And it’s kinda easy to see how to avoid this problem: use “bigger” cryptographic algorithms. So what would quantum computers be commercialized for? Quite frankly, I have no idea: until quantum computers are commonly available, and there is a workload which classical computers cannot reasonably do, there won’t be a market for quantum computers.
If I had to guess, I imagine that graph theorists will like quantum computers, because graphs can increase in complexity really fast on classical machines, but may be more tractable on quantum computers. But the only commercial applications from that would be for social media (eg Facebook hires a lot of graph theorists) and surveillance (finding correlations in masses of data). Uh, those are not wide markets, although they would have deep pockets to pay for experimental quantum computers.
So uh, not much that would benefit the average person.
- Comment on If your federal government cut internet access to your whole town then where in your town would you think that "the people" would get together to protest ? 3 weeks ago:
If the town is Bielefeld in Germany, then at the Old Market square. But that city doesn’t exist.
- Comment on Why don't compasses have just two Cardinal directions (North, East, -North, -East)? 3 weeks ago:
I see my typo, and it’s too funny and I’m just going to roll with it haha
- Comment on Why don't compasses have just two Cardinal directions (North, East, -North, -East)? 3 weeks ago:
As a practical matter, relative directions are already hard enough, where I might say that Colorado is east of California, and California is west of Colorado.
To use +/- East would mean there’s now just a single symbol difference between relative directions: California being -East of Colorado, and Colorado being +East of California.
Also, we need not forget that the conventional meridian used for Earth navigation is centered on Greenwich in the UK, and is a holdover from the colonial era where Europe is put front-and-center on a map and everything else is “free real estate”. Perhaps if the New World didn’t exist, we would have a right-ascension based system where Greenwich is still 0-deg East and Asia is almost 160-deg East. Why would colonialists center the maps on anywhere but themselves?
- Comment on Are people with High functioning autism allowed to become police officers? 3 weeks ago:
Assuming this is in the USA, I want to note that there are many other available jobs in the protective services occupation, that can be public or private sector, that face the general public (or not), and that don’t have any particular positive or negative connotation attached to the job, even after hours.
The Bureau of Labor Statistics (BLS) has a fantastic reference for available occupations:
- Comment on What's the best way to answer someone who accuses you of being a bot because they don't like what you have to say? 4 weeks ago:
Block, ignore, and continue living your non-bot life.
- Comment on Who shops at small businesses? 4 weeks ago:
Restaurants (including franchises of chains) are indeed a major segment of small businesses. Looking more broadly, any industry which: 1) offers a service/product/utility, and 2) has proven not to inflate beyond its fundamental target audience, is likely to be made up of small businesses. Those are the parameters which stave off any sort of corporate takeovers and consolidations, because corporations won’t invest in a small business if the prospect of infinite growth isn’t there. So the business stays small. And small is often perfectly fine.
That is to say, restaurants (humans can only eat so much food), bicycle stores (humans can only ride so much per day), and local produce shops (even in the Central Valley of California, there’s only so much produce to sell, and humans can’t eat infinite quantities) have these qualities.
But compare those to a restaurant supply warehouse or music equipment store, since those items can be shipped and need no customization by the end user. Consolidation and corporate meddling is possible and probable.
Then you have industries which are often local and small but are prone to financial hazards, such as real estate agents and used car lenders. Because they get paid as a percentage of the transaction size, if the price of houses or cars go up in an unchecked fashion, the profit margins also increase linearly, which makes them more tempting for corporate involvement.
There are corporate-owned national chains of real estate agents, self storage, department stores, and payday loan offices. But I’m not aware of a national chain for bicycle or bicycle accessories. Even regional chains for bicycles are few and far between. Some consolidation has happened there, but by most definitions, a bicycle shop is very much a small business.
- Comment on Is there a word for when someone is not capable of, or doesn't try to understand verbal communication in a language, they are fluent in similar to functionally illiterate but for speech? 5 weeks ago:
It might not be used frequently, but perhaps “incomprehension”?
- Comment on Is there a practical reason data centers have to sprawl outward instead of upward? 1 month ago:
In the past, we did have a need for purpose-built skyscrapers meant to house dense racks of electronic machines, but it wasn’t for data centers. No, it was for telephone equipment. See the AT&T Long Lines building in NYC, a windowless monolith of a structure in Lower Manhattan. It stands at 170 meters (550 ft).
This NYC example shows that it’s entirely possible to build telephone equipment upward, and it was very necessary considering the cost of real estate in that city. But if we look at the difference between a telephone exchange and a data center, we quickly realize why the latter can’t practically achieve skyscraper heights.
Data centers consume enormous amounts of electric power, and this produces a near-equivalent amount of heat. The chiller units for a data center are themselves estimated to consume something around a quarter of the site’s power consumption, to dissipate the heat energy of the computing equipment. For a data center that’s a few stories tall, the heat per unit of land area is small enough that a rooftop chiller can cool it. But if the data center grows taller, it has a lower ratio of rooftop area to interior volume.
This is not unlike the ratio of surface area to interior volume, which is a limiting factor for how large (or small) animals can be, before they overheat themselves. So even if we could mount chiller units up the sides of a building – which we can’t, because heat from the lower unit would affect an upper unit – we still have this problem of too much heat in a limited land area.
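To put some entirely made-up numbers on that rooftop constraint, a quick back-of-the-envelope sketch:

```python
# Heat to reject scales with the number of occupied floors, while rooftop
# area for chillers stays fixed by the building footprint.
# The figures below are hypothetical, purely for illustration.
footprint_m2 = 10_000          # land area of the building
heat_per_floor_mw = 5          # IT load plus losses per floor, made up

for floors in (2, 10, 40):
    heat_mw = floors * heat_per_floor_mw
    kw_per_roof_m2 = heat_mw * 1000 / footprint_m2
    print(f"{floors:>2} floors: {heat_mw} MW to reject, {kw_per_roof_m2:.0f} kW per m² of roof")
```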
- Comment on Why do languages sometimes have letters which don't have consistent pronunciations? 1 month ago:
The French certainly benefitted from the earlier Jesuit work, although the French did make their own attempts at “westernizing” parts of the language. I understand that today in Vietnam, the main train station in Hanoi is called “Ga Hà Nội”, where “ga” comes from the French “gare”, meaning train station (eg Gare du Nord in Paris). This kinda makes sense since the French would have been around when railways were introduced in the 19th Century.
Another example is what is referred to in English as the “Gulf of Tonkin incident”, referring to the waters off the coast of north Vietnam. Here, Tonkin comes from the French transliteration of Đông Kinh (東京), which literally means “eastern capital”.
So far as I’m aware, neither English nor French uses the name Tonkin anymore (it’s very colonialism-coded), and modern Vietnamese calls those waters by a different name anyway. There’s also another problem: that name is already in use by something else, namely the Tokyo metropolis in Japan.
In Japanese, Tokyo is written as 東京 (eastern capital) in reference to it being east of the cultural and historical seat of the Japanese Emperor in Kyoto (京都, meaning “capital metropolis”). Although most Vietnamese speakers would just say “Tokyo” to refer to the city in Japan, if someone did say “Đông Kinh”, people are more likely to think of Tokyo (or have no clue) than to think of an old bit of French colonial history. These sorts of homophones exist between the CJKV languages all the time.
And to wrap up the fun fact, if Tokyo is the most well-known “eastern capital” when considering the characters in the CJKV languages, we also have the northern capital (北京, Beijing, or formerly “Peking”) and the southern capital (南京, Nanjing). There is no real consensus on where the “western capital” is.
Vietnamese speakers will in fact say Bắc Kinh when referring to the Chinese capital city, and I’m not totally sure why it’s an exception like that. Then again, some newspapers will also print the capital city of the USA as Hoa Thịnh Đốn (華盛頓) rather than “Washington, DC”, because that’s how the Chinese wrote it down first, and then it was brought into Vietnamese, and then changed to the modern script. To be abundantly clear, it shouldn’t be surprising to have a progression from something like “Wa-shing-ton” to “hua-shen-dun” to “hoa-thinh-don”.
- Comment on Why do languages sometimes have letters which don't have consistent pronunciations? 1 month ago:
As a case study, I think Vietnamese is especially apt to show how the written language develops in parallel and sometimes at odds with the spoken language. The current alphabetical script of Vietnamese was only adopted for general use in the late 19th Century, in order to improve literacy. Before that, the grand majority of Vietnamese written works were in a logographic system based on Chinese characters, but with extra Vietnamese-specific characters that conveyed how the Vietnamese would pronounce those words.
The result was that Vietnamese scholars pre-20th Century basically had to learn most of the Chinese characters and their Cantonese pronunciations (not Mandarin, since that’s the dialect that’s geographically farther away), and then memorize how they are supposed to be read in Vietnamese, then compounded by characters that sort-of convey hints about the pronunciation. This is akin to writing a whole English essay using Japanese katakana.
Also, the modern Vietnamese script is a work of Portuguese Jesuit scholars, who were interested in rendering the Vietnamese language into a more familiar script that could be read phonetically, so that words are pronounced letter-by-letter. That process, however faithful they could manage it, necessarily obliterates some nuance that a logographic language can convey. For example, the word bầu can mean either a gourd or to be pregnant. But in the old script, no one would confuse 匏 (gourd) with 保 (to protect; pregnant) in the written form, even though the spoken form requires context to distinguish the two.
Some Vietnamese words were also imported into the language from elsewhere, having not previously existed in spoken Vietnamese. So the pronunciation would hew closer to the origin pronunciation, and then to preserve the lineage of where the pronunciation came from, the written word might also be written slightly different. For example, nhôm (meaning aluminum) draws from the last syllable of how the French pronounce aluminum. Loanwords – and there are many in Vietnamese, going back centuries – will mess up the writing system too.
- Comment on Noob RAM speed question 1 month ago:
I’m not a computer engineer, but I did write this comment for a question on computer architecture. At the very onset, we should clarify that RAM capacity (# of GBs) and clock rate (aka frequency; eg 3200 MHz) are two entirely different quantities, and generally cannot be used to compensate for each other. It is akin to trying to halve an automobile’s fuel tank in order to double the top-speed of the car.
Since your question is about performance, we have to look at both the technical impacts to the system (primarily from reduced clock rate) and then also the perceptual changes (due to having more RAM capacity). Only by considering both together can we arrive at some sort of coherent answer.
You’ve described your current PC as having an 8 GB stick of DDR4 3200 MHz. This means that the memory controller in your CPU (older systems put the memory controller on the motherboard’s northbridge) is driving the RAM at an effective 3200 MT/s. A single clock cycle is a square wave that goes up and then goes down. DDR stands for “Double Data Rate”, and means that a group of bits (called a transaction) is sent on both the up and the down of that single clock cycle. So a stick marketed as “3200 MHz” actually runs its clock at 1600 MHz, and the double data rate yields 3200 million transactions per second (3200 MT/s). For this reason, it is also labeled DDR4-3200.
Some background about DDR versus other RAM types, when used in PCs: the DDR DIMMs (aka sticks) are typically made of 8 visually-distinct chips on each side of the DIMM, although some ECC-capable DIMMs will have 9 chips. These are the small black boxes that you can see, but they might be underneath the DIMM’s heatsink, if it has one. The total capacity of these sixteen chips on your existing stick is 8 GB, so each chip should be 512 MB. A rudimentary way to store data would be for the first 512 MB to be stored in the first chip, then the next 512 MB in the second chip, and so on. But DDR DIMMs do a clever trick to increase performance: the data is “striped” across all 8 or 16 chips. That is, to retrieve a single Byte (8 bits), the eight chips on one face of the DIMM are each instructed to return their stored bit, and the memory controller composes these into a single Byte to send to the CPU. This all happens in the time of a single transaction.
We can actually do that on both sides of the DIMM, so two Bytes could be retrieved at once. This is known as dual-rank memory. But why should each chip only return a single bit? What if each chip could return 4 bits at a time? If all sixteen chips support this 4-bit quantity (ie a wider chip data width), we would get 64 bits (8 Bytes), still in the same time as a single transaction. Compare to earlier where we didn’t stripe the bits across all sixteen chips: it would have taken 16 times longer for one chip to return what 16 chips can return in parallel. Free performance!
But why am I mentioning these engineering details, which has already been built into the DIMM you already have? The reason is that it’s the necessary background to explain the next DDR hat-trick for memory performance: multi-channel memory. The most common is dual channel memory, and I’ll let this “DDR4 for Dummies” quote explain:
A memory channel refers to DIMM slots tied to the same wires on the CPU. Multiple memory channels allow for faster operation, theoretically allowing memory operations to be up to four times as fast. Dual channel architecture with 64-bit systems provides a 128-bit data path. Memory is installed in banks, and you have to follow a couple of rules to optimize performance.
Basically, dual-channel is kinda like having two memory controllers for the CPU, each driving half of the DDR in the system. On an example system with two 1 GB sticks of RAM, we could have each channel driving a single stick. A rudimentary use would be if the first 1 GB of RAM came from channel 1, and then the second 1 GB came from channel 2. But from what we saw earlier with dual-rank memory, this is leaving performance on the table. Instead, we should stripe/interlace memory accesses across both channels, so that each stick of RAM returns 8 Bytes, for a total of 16 Bytes in the time of a single transaction.
So now let’s answer the technical aspect of your question. If your system supports dual-channel memory, and you install that second DIMM into the correct slot to make use of that feature, then in theory, memory bandwidth should double, because accesses are striped across two independent channels. The downside is that for that whole striping thing to work, all channels must be running at the same speed, or else one channel would return data too late. Since you have an existing 3200 MHz stick but the new stick would be 2400 MHz, the only thing the memory controller can do is to run the existing stick at the lower speed of 2400 MHz. Rough math says that the existing stick is now operating at only 75% of its rated speed (2400/3200), but with the doubling from dual channel, that works out to roughly 150% of the original single-stick bandwidth. So still a net gain, but less than ideal.
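To put rough numbers on that, here is a small Python sketch of the theoretical peak-bandwidth math, assuming a standard 64-bit (8-byte) channel; real-world throughput is lower, but the ratio is what matters:

```python
# Theoretical peak bandwidth: transfers per second x 8 bytes per channel x channels.
BYTES_PER_TRANSFER = 8  # 64-bit DDR channel

def peak_gbs(mts: int, channels: int) -> float:
    return mts * 1_000_000 * BYTES_PER_TRANSFER * channels / 1e9

single_3200 = peak_gbs(3200, 1)   # existing stick alone, single channel
dual_2400 = peak_gbs(2400, 2)     # both sticks, downclocked, dual channel

print(f"one stick  @ 3200 MT/s: {single_3200:.1f} GB/s")
print(f"two sticks @ 2400 MT/s: {dual_2400:.1f} GB/s")
print(f"ratio: {dual_2400 / single_3200:.0%}")  # ~150% of the original
```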
The perceptual impact has to do with how a machine might behave now that it has 16 GB of memory, having increased from 8 GB. If you were only doing word processing, your existing 8 GB might not have been fully utilized, with the OS basically holding onto it. But if instead you had 50 browser tabs open, then your 8 GB of RAM might have been entirely utilized, with the OS having to shuffle memory onto your hard drive or SSD. This is because those unused tabs still consume memory, despite not being actively in front of you. In some very extreme cases, this “thrashing” causes the system to slow to a crawl, because the shuffling effort is taking up most of the RAM’s bandwidth. If increasing from 8 GB to 16 GB would prevent thrashing, then the computer would overall feel faster than before, and that’s on top of the theoretical 50% bandwidth gain from earlier.
Overall, it’s not ideal to mix DDR speeds, but if the memory controller can drive all DIMMs at the highest common clock speed and with multi-channel memory, then you should still get a modest boost in technical performance, and possibly a boost in perceived performance. But I would strongly recommend matched-speed DDR, if you can.
- Comment on Is it insane to run a home server on an old laptop instead of a Raspberry Pi for self-hosting - what do I need to worry about? 1 month ago:
Overall, it looks like you’ve done your homework, covering the major concerns. What I would add is that keeping an RPi cool is a consideration, since without even a tiny heatsink, the main chip gets awfully hot. Active cooling with a fan should be considered to prevent thermal throttling.
The same can apply to a laptop, since the intended use-case is with the monitor open and with the machine perched upon a flat and level surface. But they already have automatic thermal control, so the need for supplemental cooling is not very big.
Also, it looks like you’ve already considered an OS. But for other people’s reference, an old x86 laptop (hopefully newer than i686) has a huge realm of potential OS’s, including all the major *BSD distros. Whereas I think only Ubuntu, Debian, and Raspbian are the major OS’s targeting the RPi.
One last thing in favor of choosing the laptop: using what you have on hand is good economics and reduces premature ewaste, as well as fomenting the can-do attitude that’s common to self hosting (see: !selfhosted@lemmy.world).
TL;DR: not insane
- Comment on How do you beat post-work floppiness? 1 month ago:
At the very minimum, gym in the morning (but after coffee/caffeine, plus the time for it to kick in) is the enlightened way. It helps if your gym is nearby or you have a !homegym@lemmy.world .
I personally also use the wee morning hours to reconcile my financial accounts, since ACH transactions in the USA will generally process a day faster if submitted before 10:30 ET.
- Comment on Are physical mail generally not under surveillance? If everyone suddently ditched electronic communications and start writing letters, would governments be able to practically surveil everyone? 1 month ago:
The photos taken by the sorting machines are of the outside of the envelope, and are necessary in order to perform OCR of the destination address and to verify postage. There is no general mechanism to photograph the contents of mailpieces, and given how enormous the operations of the postal service are, casting a wide surveillance net to capture the contents of mailpieces is simply impractical, and it wouldn’t last long before someone eventually spilled the beans.
That said, what you describe is a method of investigation known as mail cover, where the info from the outside of a recipient’s mail can be useful. For example, getting lots of mail from a huge number of domestic addresses in plain envelopes, the sort that victims of remittance fraud would have on hand, could be a sign that the recipient is laundering fraudulent money. Alternatively, sometimes the envelope used by the sender is so thin that the outside photo accidentally reveals the contents. This is no different than holding up an envelope to the sunlight and looking through it. Obvious data is obvious to observe.
In electronic surveillance (a la NSA), looking at just the outside of an envelope is akin to recording only the metadata of an encrypted messaging app. No, you can’t read the messages, but seeing that someone received a 20 MB message could indicate a video, whereas 2 KB might just be one message in a rapid convo.
- Comment on Why do all text LLMs, no matter how censored they are or what company made them, all have the same quirks and use the slop names and expressions? 1 month ago:
So no, no billion dollar company can make their own training data
This statement brought along with it the terrifying thought that there’s a dystopian alternative timeline where companies do make their own training data, by commissioning untold numbers of scientists, engineers, artists, researchers, and other specialties to undertake work that no one else has. But rather than trying to further the sum of human knowledge, or even directly commercializing the fruits of that research, that it’s all just fodder to throw into the LLM training set. A world where knowledge is not only gatekept like Elsevier but it isn’t even accessible by humans: only the LLM will get to read it and digest it for human consumption.
Written by humans, read by AI, spoonfed to humans. My god, what an awful world that would be.
- Comment on Why is it called "overseas" even if a dispora population move to a place connected by land? 1 month ago:
A few factors:
- Human population centers historically were built by natural waterways and/or by the sea, to enable access to trade, seafood, and obviously, water for drinking and agriculture
- When the fastest mode of land transport is a horse (ie no railways or automobiles), the long-distance roads between nations which existed up to the 1700s were generally unimproved and dangerous, both from the risk of breakdown and from highway robbery. Short-distance roads made for excellent invasion routes for an army, and so those tended to fall under control of the same nation.
- Water transport was (and still is) capable of moving large quantities of tonnage, and so was the predominant form of trade, only seeing competition when land transport improved and air transport was introduced.
So going back centuries when all the “local” roads are still within the same country (due to conquest), and all the long-distance roads were treacherous, slow, and usually uncomfortable (ie dysentery on the Oregon Trail), the most obvious way to get to another country would have been to get a ride on a trading ship. An island nation would certainly regard all other countries as being “overseas”, but so would an insular nation hemmed in by mountains but sitting directly on the sea.
TL;DR: for most of human history, other countries were most reasonably reached by sea. Hence “overseas”.