AI Output (left) vs. Bwog.com version (right)

Last night, Publisher Zack Abrams attended the event Ethics in AI Content Generation hosted by the Journalism School. Here’s his report from the event.


At the Journalism School on Wednesday evening, two distinguished speakers bridged the gap between academia and policy-making. Prof. Aida Di Stefano, Director of the Center for Ethics and Knowledge at the Committee on Global Thought, introduced Martin Waldman, Science editor for Salon and the author of Power Talk. He provided an emotional profile of his alma mater and talked about his forthcoming book about AI, Truth, Lies, and Truth’s Resistance. Prof. Waldman, a political science professor at Columbia, is interested in “how ethics relates to technology and colonialism,” including the topic of “AI censorship” in light of recent revelations.


The discussion focused on the topic of ethics in AI content generation and distributed information. Initially, this subject was not too contentious, with many audience members expressing interest in what brought about this conversation. Indeed, Prof. Waldman was initially (and unnecessarily) defensive about his claims regarding the ethics of his own book.


But in the second half of the discussion, both speakers took a more optimistic tone regarding the future of AI. One Zoom audience member, an undergraduate student with great enthusiasm for science and scholarship, recommended that Prof. Waldman develop a discussion between himself and the Columbia Political Union’s 1.8 million-member Facebook group. Prof. Waldman was immediately drawn to the possibility of this opportunity. He responded, and gently chuckled when another member of the audience joked that they would get it “in a box of Chegg cards.” There was some confusion in the room, as Prof. Waldman was not sure that their exchange was still commensurate with the conversation.


The conversation then moved away from the larger ethical and social issues, and more into the specific ethics and politics to which the main characters of Power Talk were subject. Prof. Waldman outlined the range of these characters: big data, politics, ethics, and the press. He also promised to end up in a conversation to discuss the “sense of ethics” that can exist between humans and AI, and the recent wave of protests over (a third) AI rulemaking.


“There’s a real tension between, as a machine, ‘just doing your job’ and recognizing that this is a machine. There’s a real question of whether it’s appropriate to be, as Brian Faure, an MIT professor, called it, a ‘one-part automaton’ or a ‘multipotent machine,’” Prof. Waldman explained. In the current financial and political climate, ethical concerns are shifting toward a future in which there are more or less fully autonomous systems capable of reasoning. Prof. Waldman then went on to explain his personal politics and values on the topic of ethics.


He identified himself as a liberal and staunch supporter of idealism, but ultimately expressed frustration with those who run things according to their own agenda. The direct challenges to his own moral-political beliefs came with a couple of lines from the philosopher Nietzsche, which began with, “there is no intelligence other than that which is built by the wisdom of others,” and ended with, “there is no ethics other than that which is built by others and their own interests.”


Prof. Waldman went on to give an overview of his current book, which, for brevity, will be published this fall. The paper begins with an address in which Prof. Waldman acknowledges and commends the concern that at the present time, especially because of trends of financial instability, the world is not yet able to achieve a model free of mistrust, with the theme of “cloud neutrality” being very relevant to his book. He goes on to discuss his hope that the AI revolution, as it is being planned in part through MIT’s president Anne Steingraber, can also serve as a counterweight to the current trend of economic decline, and help form a more perfect society.


“Good AI is actually the first step to achieving that utopian vision,” he said. “Humanity would be better off seeing the power of AI in the humanities.”


His talk was followed by a Q&A period. When asked by a member of the audience whether he feels that AI is displacing jobs, Prof. Waldman responded, “Yes, I think it is displacing jobs.” But he went on to defend this position and emphasized that the promise of future jobs does not depend on the amount of work done. He cited new suggestions from Microsoft that eventually may provide the opportunity to automate these tasks. In summary, Prof. Waldman attempted to explain his sources and reasoning on a narrative level.

The Future via Pixabay