evenwicht@lemmy.sdf.org 1 week ago

Don’t Canadian insurance companies want to know where their customers are? Or are the privacy safeguards good there?

In the US, banks and insurance companies snoop on their customers to track their whereabouts. They insert surreptitious tracker pixels in email, not only to record the fact that you read their message but also when you read it and from which IP address (which reveals your whereabouts). If they suspect you are not where they expect you to be, they take action. It’s perfectly legal in the US to use that sneaky, underhanded technique rather than the transparent read-receipt mechanism described in RFC 2298. If your suppliers are using RFC 2298, lucky you.
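To illustrate how such a pixel gives away read time and location: it is just a remote image whose URL carries a per-recipient token, fetched from your IP the moment your mail client renders the message. A minimal triage sketch (the beacon URL and heuristics are invented for illustration; real trackers vary widely):

```python
# Crude tracker-pixel triage: flag remote images in an HTML email body.
# Heuristic sketch only -- the sample URL below is hypothetical.
from html.parser import HTMLParser

class PixelFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        src = a.get("src", "")
        # Remote 1x1 or token-bearing images are classic beacons: fetching
        # them tells the sender when you opened the mail, and from which IP.
        if src.startswith("http") and (
            a.get("width") == "1" or a.get("height") == "1" or "?" in src
        ):
            self.suspects.append(src)

html_body = '<p>Hi!</p><img src="https://mail.example.com/open?uid=12345" width="1" height="1">'
finder = PixelFinder()
finder.feed(html_body)
print(finder.suspects)  # the beacon URL carrying a per-recipient token
```

Blocking remote image loads in the mail client defeats this entirely, which is why many clients now do so by default.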
troyunrau@lemmy.ca 1 week ago

Your assertion that the document is malicious without any evidence is what I’m concerned about.

At some point you have to decide to trust someone. The comment above gave you reason to trust that the document was in a standard, non-malicious format. But you outright rejected their advice in a hostile tone. You base your hostility on a YouTube video.

You should read the essay “on trusting trust” and then decide whether you are going to participate in digital society or live under a bridge with a tinfoil hat.
In Canada, and likely elsewhere, insurance companies know everything about you before you even apply. Even if they don’t have personally identifiable information, you’ll be in a data bucket with your neighbours, with risk profiles based on neighbourhood, the items being insured, claim rates for people with similar profiles, etc. Very likely every interaction you have with them has been going into an LLM even prior to the advent of ChatGPT, and they will have scored those interactions against a model.

The personally identifiable information has largely been anonymized in these models. In Canada, for example, there are regulatory bodies like OSFI that they have to report to, and be audited by, to ensure the data is used in compliance with regulations. Each company will have a compliance department tasked with making sure they’re adhering.

But what you will end up doing instead is triggering fraudulent-behaviour flags. There’s something called “address fraud”, where people go out of their way to disguise their location because some lower-risk address has better rates or whatever. When you do everything you can to scrub your location, that in itself is a signal that you are operating as a highly paranoid individual, and it might put you in a bucket. If you want to be invisible to them, you want to act like you’re at the median of every category, because any outlying behaviour further fingerprints you.

Source: I have a direct connection to advanced analytics within the insurance industry (one degree of separation).
evenwicht@lemmy.sdf.org 1 week ago

> Your assertion that the document is malicious without any evidence is what I’m concerned about.

I did not assert malice. I asked questions. I’m open to evidence proving or disproving malice.
> At some point you have to decide to trust someone. The comment above gave you reason to trust that the document was in a standard, non-malicious format. But you outright rejected their advice in a hostile tone. You base your hostility on a YouTube video.

There was too much uncertainty there to inspire trust. Getoffmylan had no idea why the data was organised as serialised Java.
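For what it’s worth, checking for serialised Java is mechanical: Java’s `ObjectOutputStream` always begins with the stream magic `0xACED` followed by stream version `0x0005`. A minimal sketch (the sample byte strings are invented):

```python
# Triage sketch: does a blob look like a Java serialization stream?
# java.io object streams start with STREAM_MAGIC 0xACED, STREAM_VERSION 0x0005.
def looks_like_java_serialized(data: bytes) -> bool:
    return data[:4] == b"\xac\xed\x00\x05"

# Hypothetical samples: a serialized-object prefix vs. an ordinary PDF header.
print(looks_like_java_serialized(b"\xac\xed\x00\x05\x73\x72"))  # True
print(looks_like_java_serialized(b"%PDF-1.7 ..."))              # False
```

A hex dump of the first few bytes of the file would have settled the question either way.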
> You should read the essay “on trusting trust” and then decide whether you are going to participate in digital society or live under a bridge with a tinfoil hat.

I’ll need a more direct reference, because that phrase turns up copious references. Do you mean this study? Judging from the abstract:

> To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software.

I seem to have received software pretending to be a document. Trust would naturally not be a sensible reaction to that. In the infosec discipline we would be incompetent fools to loosely trust whatever comes at us. We make it a point to avoid trust, and when trust cannot be avoided, to demand justification. We have a zero-trust principle. We also have the rule of least privilege, which means not extending trust where it’s not necessary for the mission. Why would I trust a PDF when I can take steps to access it in a way that does not require excessive trust?
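Accessing a PDF with least privilege can start with a crude triage before any viewer touches it, such as scanning the raw bytes for markers that request active behaviour. A heuristic sketch only (markers can be encoded or obfuscated, so absence proves nothing; the sample bytes are invented):

```python
# Crude least-privilege triage: flag PDF name objects that request active
# behaviour (scripting, auto-open actions, launched programs, attachments).
# Heuristic sketch, not a parser -- real-world markers may be obfuscated.
RISKY_MARKERS = [b"/JavaScript", b"/JS", b"/Launch", b"/OpenAction", b"/EmbeddedFile", b"/AA"]

def pdf_risk_flags(raw: bytes) -> list:
    return [m.decode() for m in RISKY_MARKERS if m in raw]

# Hypothetical sample: a PDF fragment that runs JavaScript on open.
sample = b"%PDF-1.4\n1 0 obj\n<< /OpenAction << /S /JavaScript /JS (app.alert(1)) >> >>\nendobj"
print(pdf_risk_flags(sample))  # ['/JavaScript', '/JS', '/OpenAction']
```

Rendering the file to plain images in a throwaway sandbox is the stronger move; the point is that "open it and hope" is not the only option.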
> In Canada, and likely elsewhere, insurance companies know everything about you before you even apply.

When you move, how do they find out if you don’t tell them? Tracking would be one way.

Privacy is about control. When you call it paranoia, the concept of agency has escaped you. If you have privacy, you can choose what you disclose. What would be a good rationale for giving up control?
> Even if they don’t have personally identifiable information, you’ll be in a data bucket with your neighbours, with risk profiles based on neighbourhood, the items being insured, claim rates for people with similar profiles, etc. Very likely every interaction you have with them has been going into an LLM even prior to the advent of ChatGPT, and they will have scored those interactions against a model.

If we assume that’s true, what do you gain by giving them more solid data to reinforce surreptitious snooping? You can’t control everything, but it’s not in your interest to sacrifice control for nothing.
> But what you will end up doing instead is triggering fraudulent-behaviour flags. There’s something called “address fraud”, where people go out of their way to disguise their location because some lower-risk address has better rates or whatever.

Indeed, for some types of insurance policies the insurer has a legitimate need to know where you reside. But that’s the insurer’s problem. It does not rationalize a consumer recklessly feeding surreptitious surveillance. Street-wise consumers protect themselves from surveillance. Of course they can (and should) disclose their new address via proper channels if they move.

Why? Because someone might take a vacation somewhere and interact from another state. How long is a vacation? It’s for the consumer to declare where they intend to live, e.g. via a “declaration of domicile”.
> When you do everything you can to scrub your location, that in itself is a signal that you are operating as a highly paranoid individual, and it might put you in a bucket.

Sure, you could end up in that bucket if you are in a small minority of street-wise consumers. If the insurer wants to waste their time chasing false positives, the wasted time is on them. I would rather laugh at that than join the street-unwise club that makes the street-wise consumers stand out more.
BearOfaTime@lemm.ee 1 week ago
This tells me all we need to know about you.
You’re an apologist for these companies. It’s been repeatedly demonstrated that such anonymization can be pretty easily reversed.