The good guys vs bad guys argument always reminds me of the generator vs discriminator balance in a GAN. If done properly, the GAN will reach a Nash equilibrium where the generator (in this case, the spammer) produces data that is indistinguishable from real data, reducing the discriminator (in this case the spam filter) to making random guesses. This will probably not assuage your fear. ;-)
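To make the equilibrium claim concrete: for a fixed generator, the optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_gen(x)) (the standard GAN result). A minimal sketch below, using two hypothetical 1-D Gaussians to stand in for "real" and "generated" data, shows that once the generator matches the real distribution, the best possible discriminator outputs 0.5 everywhere, i.e. coin flips:

```python
import numpy as np

def optimal_discriminator(p_data, p_gen):
    # For a fixed generator, the discriminator that maximizes the GAN
    # objective is D*(x) = p_data(x) / (p_data(x) + p_gen(x)).
    return p_data / (p_data + p_gen)

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-5, 5, 201)
real = gaussian_pdf(x, 0.0, 1.0)          # stand-in for the real data density

# Early in training: the generator's density is clearly off, so the
# discriminator (spam filter) can be very confident on some inputs.
d_early = optimal_discriminator(real, gaussian_pdf(x, 2.0, 1.0))

# At the Nash equilibrium: the generator (spammer) matches the real
# density exactly, and D*(x) = 0.5 everywhere -- random guessing.
d_equilibrium = optimal_discriminator(real, real)
```

This is only an illustration of the fixed-point, not a training loop; reaching that equilibrium in practice is the hard part.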
Agree. Fan of Reality Defender. I was eerily surprised when, as an exercise, they demonstrated with my LinkedIn photo how easy it is to simulate a version of me. Different, but not too different from my likeness.
I wouldn't be surprised if there is a crop of firms who try to solve the problem of authentication at the individual level. My concern is that a grandmother may not be able to determine that a synthesized voice of a grandchild in need of help is simulated.
Agree. Going one step further, that authentication may create walled gardens - online environments that guarantee participants are (1) actual humans and (2) who they say they are. There are different ways to productize and monetize that.
@Cecilia - Couldn't agree more, especially given that people were already falling for spoofing calls long before AI voice clones existed, with the help of a little social engineering.
Bad actor: "Grandma I was in a car accident and my insurance won't cover the ambulance - I need you to send money so I can get to the hospital!"
Grandma: "That doesn't sound like you dear"
Bad actor: "I broke my nose in the crash"
AI voice spoofing + social engineering = a very scary situation. This is something that's very top-of-mind for us as we build SilverShield.