While some people seem concerned about the harm AI could do to humanity as a whole, many big tech companies are more worried about what these third-party platforms could do with their sensitive data. OpenAI is closely associated with Microsoft, so it makes sense that one of Microsoft's closest competitors would be wary of its products. Reportedly, employees have been using the tool to simplify a variety of tasks, including writing emails and producing reams of code. Apple keeps remarkably tight security, and would likely prefer that customer data and confidential product information not be fed into a program in which a close competitor is actively invested.
Similarly, Samsung is among the companies that have banned the use of external generative AI in their workforces, after it was discovered that some employees had shared “sensitive code” with the platform, according to Bloomberg. That report, based on a leaked internal memo, says Samsung was concerned about its data being stored on third-party servers outside its control.
It is worth mentioning that OpenAI recently added extra privacy options. Users can now turn off chat history and request that their input not be used to train the language model. However, enabling these options does not make your data 100% private. OpenAI says it still monitors all chats “for abuse”. It’s unclear exactly what that covers, but it likely refers to messages that might break the rules, the ones the content filter flags orange or red. And even with history turned off, data is still kept on file for 30 days before it is deleted.