This study challenges the common assumption that fairness in intelligent systems depends solely on unbiased data. In practice, algorithmic decisions are often shaped by hidden architectural choices that remain invisible to conventional fairness audits. The research demonstrates how a carefully embedded penalty parameter within a hiring prediction model can systematically influence outcomes while appearing mathematically insignificant: despite its subtlety, the concealed parameter consistently alters decision patterns and reshapes predictions with high precision. The findings reveal that bias in intelligent systems does not always originate from explicit discriminatory rules or flawed datasets; it can also emerge silently through internal model structures, optimization strategies, and unnoticed design decisions. By examining the behaviour of the manipulated model, the study shows how structural components of an algorithm can dominate system behaviour even when the system appears technically fair and statistically neutral. The work connects these observations to real-world concerns in judicial risk assessment, automated hiring platforms, and predictive decision-making technologies, where hidden architectural influences may contribute to unequal outcomes. Rather than treating algorithmic bias solely as a data problem, the study argues that the architecture itself plays a critical role in shaping fairness and accountability. Ultimately, it emphasizes the need for deeper architectural transparency and auditing practices, demonstrating that the true power of modern algorithmic systems often lies not in the data they consume but in the hidden logic through which decisions are constructed.
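The mechanism the abstract describes can be illustrated with a minimal sketch. This is not the paper's model: all names, values, and the logistic form are hypothetical assumptions, chosen only to show how a small buried penalty on a proxy feature can leave individual scores nearly unchanged while shifting group-level selection rates.

```python
import numpy as np

# Hypothetical toy example (not from the paper): a logistic hiring score in
# which a tiny hidden penalty on a group proxy shifts group outcomes.
rng = np.random.default_rng(0)

n = 10_000
skill = rng.normal(0.0, 1.0, n)   # legitimate predictive feature
group = rng.integers(0, 2, n)     # proxy for a protected group (0 or 1)

def hire_score(skill, group, hidden_penalty=0.0):
    """Logistic hiring score; `hidden_penalty` is the buried parameter."""
    logit = 1.5 * skill - hidden_penalty * group
    return 1.0 / (1.0 + np.exp(-logit))

fair = hire_score(skill, group)
biased = hire_score(skill, group, hidden_penalty=0.15)

# Per applicant, scores barely move (well under 0.05 on a 0..1 scale) ...
print("max score shift:", np.abs(fair - biased).max())

# ... yet at a fixed hiring threshold the groups' selection rates diverge.
threshold = 0.5
for name, s in [("fair", fair), ("biased", biased)]:
    rate0 = (s[group == 0] > threshold).mean()
    rate1 = (s[group == 1] > threshold).mean()
    print(name, "group0:", round(rate0, 3), "group1:", round(rate1, 3))
```

Because the penalty enters through the model's internals rather than the training data, a data-only audit of `skill` and `group` would find nothing amiss, which is the structural-vulnerability point the abstract makes.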
@article{s1552026ijsea15051019,
  title   = "Hidden Bias beyond Data: Structural Vulnerabilities in Algorithmic Decision Systems",
  journal = "International Journal of Science and Engineering Applications (IJSEA)",
  volume  = "15",
  number  = "5",
  pages   = "107--111",
  year    = "2026",
  author  = "Syed Ahmad Abdullah and Sandhya Kumari and Kumar Amrendra and Md. Irfan Alam"
}