Representing Google Cloud and DeepMind at “AI, Ethics, & Society”
Note: our research paper was first published in the Cornell University Journal of Technology and Society.
This journal, hosted by Cornell University’s Department of Science & Technology Studies, emphasizes interdisciplinary analysis across domains like digital labor, ethics, AI policy, and environmental tech design. Research featured in the journal typically examines how new and existing technologies influence culture.
San Jose buzzed with excitement as experts and enthusiasts from around the world converged for the "AI, Ethics, and Society" conference.
It was a privilege to be surrounded by some of the brightest minds in AI.
Over the course of the conference, Richard and I had the opportunity to engage with PhD students and professors, each with their unique perspectives and insights on the most pressing issues related to AI and ethics. The discussions were stimulating and thought-provoking, delving into topics ranging from algorithmic bias and data privacy to the societal impact of AI and the future of work.
Our research explored how creating a structured ontology—an organized framework of concepts—can help map diverse belief systems for more inclusive, belief-aware AI. We proposed a community-centered approach to constructing this ontology, relying on input from various belief systems to ensure representation. This methodology aimed to foster inclusivity in AI models, which often struggle with fairness, bias, and stereotyping. By aligning an ontology with epistemological justification—the way different beliefs form and validate knowledge—the framework helps AI systems interpret beliefs in culturally sensitive ways.
One of the highlights of the conference was the opportunity to present our poster.
Our study underscores challenges, such as cultural context, where terms have different meanings in distinct cultures, and interpretation variability, where even shared beliefs can hold different meanings within the same community. Despite these complexities, an inclusive ontology offers benefits like improved representation, bias reduction, and enhanced cross-cultural understanding. This is especially crucial in AI's data and modeling pipelines, which need belief-sensitive language to avoid misinterpretations and harm, aligning outputs with core belief values while balancing helpfulness and harm reduction.
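To make the cultural-context challenge concrete, here is a minimal illustrative sketch (not our actual implementation; the `BeliefConcept` class and the example senses are hypothetical) of how an ontology node might keep a single term's interpretations separated by community, so a downstream system never silently assumes one culture's meaning applies everywhere:

```python
from dataclasses import dataclass, field

@dataclass
class BeliefConcept:
    """A node in a belief ontology: one term, many community-specific senses."""
    term: str
    # Maps a cultural/community context to that community's interpretation.
    senses: dict = field(default_factory=dict)

    def add_sense(self, context: str, interpretation: str) -> None:
        self.senses[context] = interpretation

    def interpret(self, context: str) -> str:
        # Fall back to an explicit "unknown" rather than guessing across cultures.
        return self.senses.get(
            context, f"no recorded sense of '{self.term}' in '{context}'"
        )

# The same term can carry different meanings in different traditions.
karma = BeliefConcept("karma")
karma.add_sense("Hindu", "moral law of cause and effect across lifetimes")
karma.add_sense("colloquial-Western", "informal idea that deeds come back around")
print(karma.interpret("Hindu"))
```

The design choice worth noting is the explicit fallback: returning "no recorded sense" models interpretation variability honestly, rather than letting the system default to a dominant culture's reading.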
The poster session was a fantastic platform to share our work with fellow researchers, receive valuable feedback, and engage in in-depth discussions about our findings.
The "AI, Ethics, and Society" conference reignited my passion for learning and exploration.
Now, I'm eagerly researching Master's programs focused on the intersection of Data Governance, AI, and Ethics. Institutions like Brown University, Cambridge University, and Oxford University, with their rich academic traditions and cutting-edge research, are particularly exciting prospects. This feels like the perfect next step in my journey to contribute meaningfully to the responsible development and deployment of AI.
Reflecting on my time at the "AI, Ethics, and Society" conference, I am filled with gratitude for the opportunity to have represented Google at such a significant event. The experience was both humbling and empowering, reminding me of the immense responsibility that comes with developing and deploying AI technologies. As AI continues to shape our world in profound ways, it is imperative that we prioritize ethical considerations and ensure that these technologies are used for the benefit of humanity.