
MetaDAMA - Data Management in the Nordics
This is DAMA Norway's podcast to create an arena for sharing experiences within Data Management, showcase competence and level of knowledge in this field in the Nordics, get in touch with professionals, spread the word about Data Management and, not least, promote Data Management as a profession.
4#20 - Sune Selsbæk-Reitz - Promptism and the Dangerous Illusion of AI Truth (Eng)
«We need source criticism more than ever now.»
In the season finale of MetaDAMA, we dive deep into the intersection of philosophy, history, and artificial intelligence with guest Sune Selsbæk-Reitz, a tech philosopher with a background in both history and philosophy.
Sune introduces the provocative concept of “Promptism”: our era’s version of positivism, the belief that truth can be extracted from language models simply by phrasing the question correctly. But just as historians have learned through centuries of source criticism, we must ask the critical questions: Who trained this model? On what data? With what biases?
Here are Winfried’s key takeaways:
- Are numbers and data points neutral? Or can they be used to convey a message, or even a certain philosophical viewpoint?
- Philosophy is important in data. Here are two examples:
- Data Governance according to Immanuel Kant: the best possible governance
- Data Governance according to utilitarianism: focus on business value
- Lessons from studying history: question the authenticity of your sources. Who wrote it? For what purpose? Why are you reading it? The same lessons apply to data.
- That is why we need principles and values in AI ethics.
- Over-reliance on the objectivity of math (is math binary, simply right or wrong?) has been carried over into algorithmic thinking and AI.
- This is why «algorithmic authority» is an issue: just because the algorithm says «right» doesn’t mean it is right.
- Our mindset is constantly evolving. That’s why we cannot predict tomorrow’s bias. We need to ensure that our systems evolve with us.
What is the real purpose of AI systems? Are the core values merely efficiency and automation, or are they human dignity and autonomy?
Promptism:
- A new form of positivism: «Just because it’s written down, it’s true.»
- «Promptism is my term for a subtle but growing mindset around the globe: that you can extract truths from a language model just by wording your prompt well.»
- LLMs are very fluent and flattering - they say what people want to hear.
- «That’s what large language models are: You are not getting the truth. You are just getting the most common answer.»
- Objectivity is a myth. A text is always subjective, so we need to read not only the text but also people’s intentions behind it.
- Responsibility for understanding the limitations of LLM output, and for applying the needed criticism to it, is shared.
- Producers have a responsibility to make it possible to know when models are hallucinating, and to build guardrails into models so that output is not taken as the truth.
- Readers are responsible for learning how to read and understand machines.
- Consumers need to push for transparency.
- Without accountability, trust erodes.
- Politeness is a way for machines to ensure that users keep using them.
- Shouldn’t an LLM, as a «conversation partner», rather challenge you? Disagreement is part of learning.
- «Agreeableness is addictive.»
- People are starting to be influenced by how LLMs write. It changes written conversation.
- Is language being narrowed down to a certain path defined by AI? Is language becoming controllable?
- LLMs affect our lives in the way we read, write, talk, even think.
- There is a worldview baked into the system.
- Literacy also means critical thinking.