["Making a narrow AI for every task would be extremely costly and time-consuming. By making a more general intelligence, you can apply one system to a broader range of tasks, which is economically and strategically attractive.\n\nOf course, for generality to be a good option there are some necessary conditions. You need an architecture which is straightforward enough to scale up, such as the transformer which is used for GPT and follows scaling laws. It's also important that by generalizing you do not lose too much capacity at narrow tasks or require too much extra compute for it to be worthwhile.\n\nWhether or not those conditions actually hold it seems like many important actors (such as DeepMind and OpenAI) believe that they do, and are therefore focusing on trying to build an AGI in order to influence the future, so we should take actions to make it more likely that AGI will be developed safety.\n\nAdditionally, it is possible that even if we tried to build only narrow AIs, given enough time and compute we might accidentally create a more general AI than we intend by training a system on a task which requires a broad world model.\n\nSee also:\n* [https://slatestarcodex.com/2019/08/27/book-review-reframing-superintelligence/ Reframing Superintelligence] - A model of AI development which proposes that we might mostly build narrow AI systems for some time."]