They are already here and have been. Think about it, or has this concept eluded you as well?
Pure futuristic bunk.
There are risks attendant to current-generation and near-term Large Language Model AI systems, but those risks are primarily in the form of creating realistic disinformation to be used by bad actors. This code is not thinking and therefore has no intents or feelings. It is cleverly organizing and presenting extant information available on the Internet and thereby lowering public trust in government and the media.
Most, if not all, deep experts believe that the benefits conferred by LLM AI are orders of magnitude greater than the risks.
As for controlling some future super-intelligent machine that could "think": these things require massive amounts of electrical power. Just make sure there are redundant and safeguarded physical master switches beyond electronic control.
You can read the White House plan to regulate AI here. It contains many interesting and useful details.
My post above presented some information out of order. It should read:
. . . In the wrong hands, realistic disinformation could be presented to further lower public trust in government and the media.
Yes, there is: not only through Asimov’s Three Laws, but also through a mandatory program insert forbidding the AI from rewriting or reinterpreting the Three Laws.