
Gujarat High Court Bans Use of AI in Judicial Decision-Making, Holds Judges Personally Liable for AI-Assisted Outputs

The Gujarat High Court has introduced a strict policy regulating the use of artificial intelligence in judicial functioning, expressly prohibiting its role in decision-making. The move aims to preserve judicial independence and ensure that human reasoning remains central to the administration of justice.

Under this policy, judges and court staff are barred from using AI tools for drafting orders, preparing judgements, or making any adjudicatory decisions. The court has clarified that AI cannot be used for judicial reasoning, sentencing considerations, or determining findings of fact or law.

The policy has been framed under Articles 225 and 227 of the Constitution and is rooted in the fundamental right to a fair hearing under Article 21. It applies widely to judicial officers, court staff, legal assistants, interns, and para-legal volunteers across the High Court and district judiciary.

Importantly, even indirect reliance on AI is restricted. The use of AI for sorting evidence, analysing testimony, assessing credibility, or organising evidentiary material has been expressly prohibited. The court has emphasised that such functions involve critical human judgment and cannot be delegated to automated systems.

At the same time, the policy permits limited use of AI for assistive purposes, including legal research, identifying precedents, and improving the language or structure of drafts. However, such outputs must always be verified independently against authoritative sources and cannot influence substantive reasoning.

The responsibility for any AI-assisted output rests entirely with the individual using it. The policy makes it clear that once a document is signed or authenticated, the user becomes fully liable for any errors or inaccuracies. It further states, “The use of AI does not constitute a defence to a finding of error, misconduct, or professional negligence.”

The court has also mandated strict safeguards regarding data privacy. Entry of sensitive or personal information into public AI tools has been completely prohibited. This includes details of parties, witnesses, pending proceedings, and confidential legal strategies.

Additionally, judicial officers and staff must disclose any use of AI in preparing research notes or briefs. Transparency in such usage has been made a mandatory requirement to ensure accountability.

The policy also recognises the risk of bias in AI systems. It cautions against reliance on outputs that may reinforce discrimination based on gender, caste, religion, or socio-economic status, thereby safeguarding fairness in judicial outcomes.

Any violation of these rules will be treated as misconduct and may lead to disciplinary action. The consequences may also extend to civil or criminal liability under applicable laws, including the Information Technology Act, 2000 and the Bharatiya Nyaya Sanhita, 2023.

Through this policy, the Gujarat High Court has drawn a clear boundary between technological assistance and judicial responsibility, reinforcing that justice must remain a fundamentally human function.


——————————————–

Have a case update, article, or deal to share? Courtroom Today welcomes contributions from lawyers, law firms, and legal professionals. Write to contact@courtroomtoday.com