Barry Diller, a well-known businessman, has come to the defense of OpenAI CEO Sam Altman. He said he trusts Sam Altman, but at the same time he issued a major warning.
Diller said that in the face of AGI (Artificial General Intelligence), trust becomes 'irrelevant'. AGI is a technology that could become more intelligent than humans. According to Diller, AGI is unpredictable, and that is why it needs guardrails, i.e. safety measures.
Trust in Sam Altman, but AGI is different
Defending Sam Altman, Diller said that he trusts him. But he also made it clear that when it comes to AGI, trust means nothing. AGI is something that could slip out of control, no matter who is behind it.
Why does AGI need guardrails?
In Diller's view, AGI is an unpredictable force, so it must have strong guardrails, i.e. safety measures. If AGI is developed without any limits, it could become dangerous. Diller raised this point because the world is racing toward AGI.
Our Take: Safety matters more than trust
Barry Diller has a point. Trusting Sam Altman is one thing, but AGI is a technology on an entirely different level. In the face of AGI, trust in any one person becomes irrelevant. We need strong rules and safety measures for AGI, no matter who develops it. Diller is right that guardrails are necessary; without them, AGI could be unpredictable and dangerous.