# OpenAI calls for the establishment of an organization to prevent super-intelligent artificial intelligence

**[OpenAI](https://laodong.vn/cong-nghe/openai-gioi-thieu-ung-dung-chatgpt-cho-ios-1194215.ldo) argues that a watchdog is needed to protect humanity against the risks of "super-intelligent" AI.**

OpenAI's leaders have called for the regulation of "super-intelligent" [artificial intelligence](https://laodong.vn/cong-nghe/phan-mem-photoshop-bat-dau-duoc-ho-tro-boi-tri-tue-nhan-tao-1196135.ldo) (AI), arguing that an agency equivalent to the International Atomic Energy Agency is needed to protect humanity from the risk of accidentally creating something destructive, according to the Guardian.

In a short note posted on the company's website, co-founders Greg Brockman and Ilya Sutskever and chief executive officer Sam Altman called for an international regulator to be established and for work to begin on how to "inspect systems, require audits, test for compliance with safety standards, and place restrictions on deployment and security levels" in order to reduce the "existential risk" such systems could pose.

"It is conceivable that within the next 10 years, AI systems will exceed expert skill levels in most domains and carry out as much productive activity as one of today's largest corporations. In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future, but we have to manage risk to get there. Given the possibility of existential risk, we can't just be reactive," OpenAI wrote.

In the short term, the OpenAI trio called for "some degree of coordination" among companies working at the cutting edge of AI research, to ensure that ever more powerful models integrate smoothly with society while prioritizing safety.
Such coordination could come through a government-led project or through a collective agreement to limit the growth of AI capability.

Researchers have warned of the potential risks of superintelligence for decades, but as AI development has accelerated, those risks have become more concrete. The US-based Center for AI Safety (CAIS), which works to "reduce societal-scale risks from artificial intelligence", describes eight categories of "catastrophic" and "existential" risk that AI development could pose.

While some worry that a powerful [AI](https://laodong.vn/cong-nghe/sieu-may-tinh-nhanh-nhat-the-gioi-tham-gia-phat-trien-tri-tue-nhan-tao-1195692.ldo) could destroy humanity completely, whether accidentally or deliberately, CAIS describes other, more insidious harms. A world in which AI systems are voluntarily handed ever more labor could lead to humanity "losing the ability to self-govern and becoming completely dependent on machines", a scenario described as "enfeeblement"; and a small group of people controlling powerful systems could "make AI a centralizing force", leading to "value lock-in", an eternal caste system between the ruled and the rulers.

Faced with those risks, "people around the world should democratically decide on the bounds and defaults for AI systems," OpenAI's leaders say, while conceding that "we don't yet know how to design such a mechanism." Nevertheless, OpenAI believes that continuing to develop powerful systems is worth the risk.