
Brady D Lund
Brady D. Lund, Ph.D., is an assistant professor of information science at the University of North Texas. He is an expert on the ethical integration of artificial intelligence and other emerging computing technologies into society and higher education. His work has been cited in over 6,000 publications since 2022, and he has been included in Clarivate's list of the top 2% most-cited researchers annually since 2023. He teaches undergraduate and graduate courses on cyber ethics, information science, and academic research, and advises students at all levels, across a wide variety of disciplines, who are interested in the integration of technologies into society. Brady also co-directs the Computational Humanities and Information Literacy Lab (CHILL) and holds leadership roles in the Association for Information Science and Technology (ASIS&T).
Google Scholar: https://scholar.google.com/citations?user=IGZZD-UAAAAJ&hl=en
Papers by Brady D Lund
Purpose
This paper provides a comparative analysis of global standards, frameworks, and legislation for artificial intelligence (AI) transparency, emphasizing the importance of trust, accountability, and ethical AI development. It investigates how regions such as the United States, European Union, China, and Japan approach transparency in AI systems and identifies best practices and unresolved challenges.
Design/methodology/approach
The study conducts a cross-jurisdictional review of key transparency initiatives, including the IEEE P7001 standard, the CLeAR Documentation Framework, and recent government actions such as U.S. Executive Order 14110. It analyzes transparency mechanisms, such as documentation, risk-tiered systems, stakeholder communication, and explainability, across legal, policy, and technical domains.
Findings
Despite regional differences, common principles in AI transparency include:
- Tiered transparency based on risk and system impact.
- Continuous documentation across development cycles.
- Tailored explanations for diverse stakeholders.
Challenges persist in balancing transparency with privacy, intellectual property, and security concerns, especially amid rapid AI innovation.
Originality/value
This paper contributes to the growing body of research on AI governance and regulation by synthesizing current transparency standards and proposing the need for adaptive, sector-specific regulatory models. It offers a framework for policymakers, developers, and researchers to understand emerging transparency obligations while supporting innovation.
Design/methodology/approach: A secondary data analysis was conducted using the 2018 Health and Retirement Study (HRS). Path analysis and a correlation matrix were used to examine relationships among eleven modifiable variables: anxiety, sociability, self-esteem, intellectual curiosity, religiosity, closeness with family and friends, socioeconomic status, work status/demands, physical health, life control, and life satisfaction.
Findings: The study identified intellectual curiosity, self-esteem, and sociability as direct predictors of digital communication technology use among older adults. Other variables, including anxiety, religiosity, and work demands, showed indirect or moderating effects. Notably, higher anxiety and greater work demands were associated with lower use of digital communication technologies, while religiosity showed both positive and negative associations depending on the pathway.
Originality/value: Unlike prior studies that focus on demographic or technical barriers, this research emphasizes modifiable psychological and social predictors of digital communication technology adoption in later life. The findings offer insights for designing interventions to support socially isolated older adults by promoting technology use for meaningful social connection.