What do AI chatbots know about us, and who are they sharing it with?

AI chatbots are relatively old by tech standards, but the newest crop, led by OpenAI's ChatGPT and Google's Bard, is vastly more capable than its ancestors, and not always for positive reasons. The recent explosion in AI development has already created concerns around misinformation, disinformation, plagiarism and machine-generated malware. What problems might generative AI pose for the privacy of the average internet user? The answer, according to experts, is largely a matter of how these bots are trained and how much we plan to interact with them.

In order to replicate human-like interactions, AI chatbots are trained on massive amounts of data, a significant portion of which is derived from repositories like Common Crawl. As the name suggests, Common Crawl has amassed years' worth of data, petabytes in all, simply by crawling and scraping the open web. "These models are training on large data sets of publicly available data on the internet," said Megha Srivastava, a PhD student in Stanford's computer science department and former AI resident with Microsoft Research. Even though ChatGPT and Bard use what they call a "filtered" portion of Common Crawl's data, the sheer size of the model makes it "impossible for anyone to kind of look through the data and sanitize it," according to Srivastava.
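Neither OpenAI nor Google has detailed exactly how that filtering works. Purely as an illustration of the kind of automated scrub such a pipeline might run over scraped text before training, here is a minimal sketch; the regex patterns, placeholder tokens and sample text are assumptions for the example, not any company's actual method.

```python
import re

# Hypothetical, simplified PII scrubber for web-scraped training text.
# Real filtering pipelines are far more elaborate; these patterns and
# placeholders are illustrative assumptions only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(document: str) -> str:
    """Replace obvious PII with placeholder tokens before the text is used."""
    for label, pattern in PII_PATTERNS.items():
        document = pattern.sub(f"[{label.upper()}_REDACTED]", document)
    return document

if __name__ == "__main__":
    sample = "Reach Dave at dave@example.com or +1 415 555 0100."
    print(scrub(sample))
    # Prints: Reach Dave at [EMAIL_REDACTED] or [PHONE_REDACTED].
```

Srivastava's point is that even careful versions of this kind of filtering struggle at petabyte scale: patterns miss edge cases, and no one can manually review what slips through.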

Whether through your own carelessness or the poor security practices of a third party, your personal data could be sitting in some far-flung corner of the internet right now. Even though it might be difficult for the average user to access, it's possible that information was scraped into a training set and could be regurgitated by a chatbot down the line. And a bot spitting out someone's actual contact information is in no way a theoretical concern. Bloomberg columnist Dave Lee posted on Twitter that, when someone asked ChatGPT to chat on the encrypted messaging platform Signal, it provided his exact phone number. This sort of interaction is likely an edge case, but the information these learning models have access to is still worth considering. "It's unlikely that OpenAI would want to collect specific information like healthcare data and attribute it to individuals in order to train its models," David Hoelzer, a fellow at the security organization SANS Institute, told Engadget. "But could it inadvertently be in there? Absolutely."

OpenAI, the company behind ChatGPT, did not respond when we asked what measures it takes to protect data privacy, or how it handles personally identifiable information that may be scraped into its training sets. So we did the next best thing and asked ChatGPT itself. It told us that it is "programmed to follow ethical and legal standards that protect users' privacy and personal information" and that it doesn't "have access to personal information unless it is provided to me." Google, for its part, told Engadget it programmed similar guardrails into Bard to prevent the sharing of personally identifiable information during conversations.

Helpfully, ChatGPT brought up the second major vector by which generative AI might pose a privacy risk: usage of the software itself, either via information shared directly in chat logs or device and user information captured by the service during use. OpenAI's privacy policy cites several categories of standard information it collects on users, which could be identifiable, and upon starting it up, ChatGPT does caution that conversations may be reviewed by its AI trainers to improve systems.

Google's Bard, meanwhile, does not have a standalone privacy policy; instead it uses the blanket privacy document shared by other Google products (which happens to be tremendously broad). Conversations with Bard don't have to be saved to the user's Google account, and users can delete the conversations via Google, the company told Engadget. "In order to build and sustain user trust, they're going to have to be very transparent around privacy policies and data protection procedures at the front end," Rishi Jaitly, professor and distinguished humanities fellow at Virginia Tech, told Engadget.

ChatGPT does offer a "clear conversations" action, but pressing it does not actually delete your data, according to the service's FAQ page, nor is OpenAI able to delete specific prompts. While the company discourages users from sharing anything sensitive, seemingly the only way to remove personally identifying information provided to ChatGPT is to delete your account, which the company says will permanently remove all associated data.

Hoelzer told Engadget he's not worried that ChatGPT is ingesting individual conversations in order to learn. But that conversation data is being stored somewhere, so its security becomes a reasonable concern. Incidentally, ChatGPT was taken offline briefly in March because a programming error revealed information about users' chat histories. It's unclear this early in their broad deployment whether chat logs from these sorts of AI will become valuable targets for malicious actors.

For the foreseeable future, it's best to treat these sorts of chatbots with the same suspicion users should treat any other tech product. "A user playing with these models should enter with expectation that any interaction they're having with the model," Srivastava told Engadget, "it's fair game for OpenAI or any of these other companies to use for their benefit."

This article originally appeared on Engadget at https://www.engadget.com/what-do-ai-chatbots-know-about-us-and-who-are-they-sharing-it-with-140013949.html?src=rss


A smartphone with a displayed ChatGPT logo is placed on a computer motherboard in this illustration taken February 23, 2023. REUTERS/Dado Ruvic/Illustration

Eight months post-Roe, reproductive-health privacy is still messy

Data privacy awareness boomed last June when the Supreme Court overturned Roe v. Wade, limiting access to safe, legal abortion. Now, eight months later, privacy experts say not to let your guard down. Legislative bodies have made little progress on health data security.

We give up so much data each day that it's easy to tune out. We blindly accept permissions or turn on location sharing, but that data can also be used by governing bodies to prosecute civilians or by attackers looking to extort individuals. That's why, when SCOTUS declared access to abortion would no longer be a constitutional right, people began to scrutinize the amount of private health data they were sending to reproductive-health apps.

"The burden is really on consumers to figure out how a company, an app, a website is going to collect and then potentially use and share their data," said Andrew Crawford, senior counsel for privacy and data at the Center for Democracy and Technology.

There aren't widespread industry standards or federal legislation to protect sensitive data, despite some increased regulatory action since last year. Even data that isn't considered personally identifiable or explicitly health related can still put people at risk. Location data, for example, can show if a patient traveled to receive an abortion, possibly putting them at risk of prosecution.

"Companies see that as information they can use to make money," Jen Caltrider, lead at Mozilla's consumer privacy organization Privacy Not Included, told Engadget. Research released by Caltrider's team in August analyzed the security of 25 reproductive-health apps. Eighteen of them earned a privacy warning label for failing to meet privacy standards.

So, what's left for users of reproductive-health apps to do? The obvious advice is to carefully read the terms and conditions before signing up in order to better understand what's happening with their data. If you don't have a legal degree and an hour to spare, though, there are some basic rules to follow. Turning off data sharing that isn't necessary to the function of the app, using encrypted chats to talk about reproductive care, signing up for a trustworthy VPN and leaving your phone at home if you're accessing reproductive health care can all help protect your information, according to Crawford.

While industry standards are still lacking, increased public scrutiny has led to some improvements. Some reproductive-health apps now store data locally as opposed to on a server, collect data anonymously so that it cannot be accessed by law enforcement or base operations in places like Europe that have stronger data privacy laws. We spoke with three popular apps that were given warning labels by Privacy Not Included last August to see what's changed since then.
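To make the "local instead of server" idea concrete: a local-first app encrypts entries on the device and never uploads them, so there is no server-side copy to subpoena or breach. The sketch below is a generic illustration using Python's cryptography library, not the architecture of any specific app; the file names and key handling are simplifying assumptions (a real app would keep the key in the platform keystore).

```python
import json
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography

# Illustrative "local-first" storage: entries are encrypted and written to the
# device instead of being sent to a server. File locations and key handling
# are simplified assumptions for this sketch.
KEY_FILE = Path("local.key")
DATA_FILE = Path("entries.enc")

def _load_key() -> bytes:
    if KEY_FILE.exists():
        return KEY_FILE.read_bytes()
    key = Fernet.generate_key()
    KEY_FILE.write_bytes(key)
    return key

def save_entry(entry: dict) -> None:
    """Encrypt one health entry and append it locally; nothing leaves the device."""
    token = Fernet(_load_key()).encrypt(json.dumps(entry).encode())
    with DATA_FILE.open("ab") as f:
        f.write(token + b"\n")

def load_entries() -> list:
    """Decrypt and return all locally stored entries."""
    if not DATA_FILE.exists():
        return []
    fernet = Fernet(_load_key())
    return [json.loads(fernet.decrypt(line))
            for line in DATA_FILE.read_bytes().splitlines() if line]

if __name__ == "__main__":
    save_entry({"date": "2023-03-01", "note": "cycle day 1"})
    print(load_entries())
```

The trade-off is that local-only data is gone if the phone is lost, which is one reason many apps still default to cloud sync.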

Glow's Eve reproductive-health app introduced an option to store data locally instead of on its server, among other security measures. Glow told Engadget that it doesn't sell data and that employees are required to take privacy and security training.

A similar app, Flo Health, has introduced an anonymous mode and hired a new privacy exec since the report. The company told Engadget that it hopes to expand its anonymous mode features in the future with additions like the ability to stop receiving IP addresses completely.

Clue, another app that landed on the warning list, adheres to the stricter privacy laws of the European Union, known as the General Data Protection Regulation, co-CEO Carrie Walter told Engadget. She added that the company will never cooperate with a government authority to use people's health data against them, and recommended users keep up with updates to its privacy policy for more information.

But there are no one-and-done solutions. With permissions changing frequently, people who use health apps are also signing up to consistently check their settings.

"Apps change constantly, so keep doing your research, which is a burden to ask consumers," Caltrider said. "Use anonymous modes, when they're available, store things locally, as much as you can. Don't share location if you can opt out of location sharing."

This article originally appeared on Engadget at https://www.engadget.com/eight-months-post-roe-reproductive-health-privacy-is-still-messy-160058529.html?src=rss


Raquel del Rio, 36, who works in police forces, poses as she observes a period calendar tracker app on her mobile phone at her home in Madrid, Spain, May 16, 2022. Picture taken May 16, 2022. REUTERS/Isabel Infantes

Apple is convinced my dog is stalking me

As far as I know, no one is using an Apple AirTag to stalk me. But if that were to change, I'm not even sure I'd notice Apple's attempts to warn me. The "AirTag Found Moving With You" notification near-constantly sits on my homescreen, and I've gotten used to quickly swiping it away.

But I'm getting ahead of myself, so let me tell you about my dog, Rosie. She's a sweet-tempered, mild-mannered rescue. Still, there was one catch when we adopted her: She's a flight risk.

We've seen this firsthand when the sound of fireworks or a strong wind causes her to enter a full-blown panic. Rosie shuts down, shakes and, when it's really bad, tries to run away. We're working on it, but, in the meantime, we've turned to Apple AirTags as an extra reassurance.

The $29 quarter-sized AirTag attached to her collar keeps track of her location so that we can quickly find her if she ever gets away. It's mostly for peace of mind (we've only had to use it once), but it's also quickly become an annoying part of my daily routine.

The problem is that the AirTag is registered to my partner's device. That means that Apple doesn't recognize my iPhone in connection with the AirTag, seeing the unknown tracker as a threat to my safety. It sends a notification that there's an AirTag following me, which won't go away until I acknowledge its presence in the Find My app, and there's no way to tell it "hey, that's just Rosie!" to disable the recurring notification. Plus, it'll ping and make sounds to alert me of its presence, causing our already skittish dog confusion.

An example of the unwanted tracking notification on an iPhone and the options to proceed. (Screenshot: Katie Malone)

These safety features exist for a good reason. They can notify a survivor that they're being followed and put them in control to bring it as proof of stalking to law enforcement, if that's something they feel safe doing, Audace Garnett, technology safety project manager at the National Network to End Domestic Violence, told Engadget. In cases like that, the AirTag's persistence may be a welcome way to manage one's safety. Competitors like Bluetooth tracker Tile have taken note, implementing a $1 million penalty on people who use the product to stalk someone.

"For us, who are not being stalked or harassed, it may be an annoyance to us," Garnett said. "But for someone that's in a domestic violence situation, who is under power and control, this may be a life-saving tool for them."

There are a few viable solutions, but none quite worked for me. The notification provides an option to disable the AirTag, which would be helpful to stop an unwanted third-party from knowing your location. That feature renders the AirTag useless, though, so it would no longer be able to track my dog if she did get out.

There is a way to pause tracking notifications for that specific AirTag, but it only lasts for 24 hours. Disabling Find My notifications didn't work, so I tried disabling unwanted tracking notifications. That setting disables all unwanted tracking notifications, not just the ones for this specific AirTag. So, if someone were to slip one in my bag, I wouldn't get those notifications either. (Either way, the AirTag would still ping and make other noises as a backup safety feature for folks without smartphones.)

My partner and I could always set up Family Sharing in iCloud, a joint account that connects our devices. If we did that, I would unlock an option to cancel notifications for Rosie's AirTag. We currently have separate accounts, though, and aren't interested in fully merging our clouds. I could also replace the AirTag with another tracking device, like one of the slew of options available specifically for pets, if I wanted to spend the additional cash to avoid this feature.

Or, I could deal with the minor inconvenience knowing that somewhere out there, this feature is helping someone else stay safe. I think I'll go with that.

If you are experiencing domestic violence and similar abuse, you can contact the National Domestic Violence Hotline by phone at 1-800-799-SAFE (7233) or by texting "START" to 88788.

My dog, Rosie, models her AirTag accessory.

A brown dog lying on the sidewalk wears a red collar and harness. An Apple AirTag in a white case is attached to the collar.

Twitter's 2FA paywall is a good opportunity to upgrade your security practices

Twitter announced plans to pull a popular method of two-factor authentication for non-paying customers last week. Not only could this make your account more vulnerable to attack, but it may even undermine the platformโ€™s security as a whole and set a dangerous precedent for other sites.

Two-factor authentication, or 2FA, adds a layer of security beyond password protection. Weak passwords that are easily guessed by hackers, leaked passwords or phishing attacks that can lure password details out of a user can all lead to unwanted third-party account access.

With 2FA, a user has another guard up. Simply entering a password isn't enough to gain account access; instead, the user gets a notification via text message, or uses an authenticator app or security key, to approve access.

"Two-factor authentication shouldn't be behind a paywall," Rachel Tobac, CEO of security awareness organization SocialProof Security, told Engadget, "especially not the most introductory level of two factor that we find most everyday users employing."

Starting March 20, non-subscribers to Twitter will no longer be able to use text message authentication to get into their accounts. The feature will be automatically disabled if users don't set up another form of 2FA. That puts users who don't act quickly to update their settings at risk.

If you don't want to pay $8 to $11 per month for a Twitter Blue subscription, there are still some options to keep your account secure. Under security and account access settings, Twitter users can change to "authentication app" or "security key" as their two-factor authentication method of choice.

Software-based authentication apps like Duo, Authy, Google Authenticator and the 2FA authenticator built into iPhones either send you a notification or, in the case of Twitter, generate a token that will let you complete your login. Instead of just a password, you'll have to type in the six-digit code you see in the authentication app before it grants access to your Twitter account.
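Those six-digit codes are typically time-based one-time passwords (TOTP, RFC 6238): the app and the site share a secret at setup, and both derive the same short-lived code from that secret plus the current time. Here is a minimal sketch using only Python's standard library; the base32 secret shown is a placeholder, since a real one comes from the QR code the site provides at enrollment.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # Placeholder secret for illustration; a real secret is provisioned at setup.
    print(totp("JBSWY3DPEHPK3PXP"))
```

Because the server holds the same secret, it can compute the expected code and check what you type. But anything a user can type, a convincing phishing page can also ask for, which is the weakness Tobac describes below.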

Security keys work in a similar way, requiring an extra step to access an account. It's a hardware-based option that plugs into your computer or connects wirelessly to confirm your identity. Brands include YubiKey, Thetis and others.

Security keys are often considered more secure because a hacker would have to physically acquire the device to get in. 2FA methods that require a code to get in, like those via text message or authentication app, are phishable, according to Tobac. In other words, hackers can deceive a user into giving up that code in order to get into the account. But hardware like security keys can't be remotely accessed in the same way.
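The underlying reason is that a security key's response is bound to the website the browser actually contacted, so a stolen-code trick does not transfer. The toy example below illustrates that origin-binding idea only; real FIDO2/WebAuthn uses per-site public-key signatures rather than the shared-secret HMAC stand-in used here, and every name in the snippet is an assumption for illustration.

```python
import hashlib
import hmac
import secrets

# Toy illustration of origin binding, the property that makes security keys
# phishing-resistant. Real FIDO2/WebAuthn uses public-key credentials; the
# shared-secret HMAC here is a simplification for brevity.
DEVICE_SECRET = secrets.token_bytes(32)  # in reality, locked inside the key's hardware

def key_respond(challenge: bytes, origin: str) -> bytes:
    """The key signs the server's challenge together with the origin the browser saw."""
    return hmac.new(DEVICE_SECRET, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes) -> bool:
    """The real site only accepts responses computed over its own origin."""
    expected = hmac.new(DEVICE_SECRET, challenge + b"https://twitter.com", hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

if __name__ == "__main__":
    challenge = secrets.token_bytes(16)
    # Legitimate login: the browser reports the real origin, so verification succeeds.
    print(server_verify(challenge, key_respond(challenge, "https://twitter.com")))      # True
    # Phishing page: same user, same key, different origin, so verification fails.
    print(server_verify(challenge, key_respond(challenge, "https://twltter.example")))  # False
```

A one-time code, by contrast, is just digits: it verifies the same no matter which site the user typed it into.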

"Cyber attackers don't stand next to you when they hack you. They're hacking you through the phone, email, text message or social media DM," Tobac said.

Still, putting any 2FA behind a paywall makes it less accessible for users, especially if the version behind the paywall is as widely used as text-based authentication. Fewer people may be inclined to set it up, or they may ignore the pop-ups from Twitter to update their accounts so that they can get back to tweeting, Tobac said.

Without 2FA, it's a lot easier for unauthorized actors to get into your account. More compromised accounts make Twitter a less secure platform with more potential for attacks and impersonation.

"When it's easier for us to take over accounts, myths and disinformation increase and bad actors are going to increase on the site, because it's easier to gain access to an account with a large following that you can tweet out whatever you like pretending to be them," Tobac said.

Twitter CEO Elon Musk implied that paywalling text-message based 2FA would save the company money. The controversial decision comes after a privacy and security exodus at Twitter last fall. In the midst of layoffs, high-level officials like former chief information security officer Lea Kissner and former head of integrity and safety Yoel Roth left the company.


The Twitter logo displayed on a mobile phone screen in an illustration photo. Krakow, Poland, February 9, 2023. (Photo by Beata Zawrzel/NurPhoto via Getty Images)
โŒ