I am becoming ever more concerned about my cybersecurity as AI’s abilities and accessibility increase. Besides best practices, do you all have any recommendations for strengthening my personal cybersecurity against attacks that involve malicious use of AI?
When I say best practices, I mean most of the practices Henry mentions in Go Incognito. I don’t have the most extreme threat model and still don’t use Linux. I do use FIDO2 2FA where possible, less phishing-resistant 2FA where it isn’t, and Lockdown Mode, to name a few.
Thanks in advance!
I wouldn’t say AI is as good as humans at making malicious software. What you are doing is already enough; you are the weakest point of your security. For example, on a recent podcast I heard that scammers in India are making AI calls with a voice somewhat similar to one of your cousins, asking for money to be sent ASAP to bribe a police officer or post bail. It may seem easy not to fall for these scams, but when you’re in a hurry, not thinking clearly, and believe your cousin will stay in prison for years if you don’t send the money, it isn’t as easy as you think.
First off - welcome to the forum.
I agree with @PrivacyFounder here. AI is just another tool, and I would not say it is any more dangerous than human attackers when it comes to hacking or cyberattacks.
If you’re already implementing best practices, using strong unique passwords, and especially using 2FA, I wouldn’t worry too much.
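To show why 2FA codes are hard to guess even when a password leaks: the codes most authenticator apps display are just an HMAC of the current time, per RFC 4226 (HOTP) and RFC 6238 (TOTP). A minimal sketch in Python, using only the standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP per RFC 4226: HMAC-SHA1 of a counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # low nibble picks a position
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """TOTP per RFC 6238: HOTP with the counter derived from Unix time."""
    return hotp(secret, int(time.time()) // period, digits)

# RFC 4226 Appendix D test vector: counter 0 with this secret yields "755224".
print(hotp(b"12345678901234567890", 0))  # → 755224
```

Because the code changes every 30 seconds and is derived from a shared secret the attacker never sees, a stolen password alone isn’t enough to log in.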
Thank you! I guess I just worry about hackers using AI to automate many tasks that used to require a lot of time, making it easier to pull off more difficult hacks. Anyway, thanks again!
Regarding automation of tasks, hackers have been using scripts for years to achieve similar functionality. For example, there are scripts for attempting to brute force passwords, trying all the different possibilities.
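A back-of-envelope calculation makes the point about brute forcing concrete: what matters is password length and character set, not how clever the guessing script is. The guess rate below is an illustrative assumption for an offline attack against a fast hash, not a measured figure.

```python
# Worst-case time to exhaust every password of a given length, to show why
# longer passwords (plus rate limiting and 2FA) matter against automation.
GUESSES_PER_SECOND = 1e10  # assumed offline guess rate; purely illustrative

def seconds_to_exhaust(alphabet_size: int, length: int) -> float:
    """Time to try every possible password of the given length."""
    return alphabet_size ** length / GUESSES_PER_SECOND

SECONDS_PER_YEAR = 86400 * 365
for length in (8, 12, 16):
    secs = seconds_to_exhaust(95, length)  # 95 printable ASCII characters
    print(f"{length} chars: {secs / SECONDS_PER_YEAR:.2e} years")
```

Each extra character multiplies the search space by 95, so going from 8 to 16 characters moves the worst case from days to longer than the age of the universe, regardless of whether the guessing is scripted or AI-assisted.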
Given you mentioned this, I assume you aren’t being actively targeted, so you shouldn’t have too much to worry about. Nevertheless, being careful is seldom a bad thing, and 2FA in particular is a good practice.
Great! Thanks whiskeyhighball!
Had a discussion about this in a sophisticated setting, especially regarding impersonation. Here was the conclusion we came to:
AI has the ability to enhance your voice. However, the more you use it to empower your voice, the greater its ability to hijack your voice, style, and personality and reproduce them in other contexts.
The more data you give an AI, the better it can replicate you and reproduce your writing style. This can be used to forge writing that supposedly came from you.
AI impersonation isn’t limited to writing. There are news channels where the reporter is an AI-generated video, and given enough data about a person, AI can produce images and videos of them. (It’s kind of weird to watch a newscast done by AI.)
The better an AI knows you, the better it can impersonate you. Therefore, it is pivotal to safeguard your data from AI. AI is now used by Big Tech companies for data analysis and optimizing targeted ads. Personally, I want my personal data to stay as far away from AI as possible.
OK, that’s a bit extreme. Most of the AI that Apple makes is designed to run locally on your iPhone, while most of the AI that Google makes is intended to run on Google’s servers (in the cloud). As long as what the AI learns from your data doesn’t leave your device, you’re fine.