The Role of Large Language Models in Patent Law: Insights from EPO Decision T 1193/23

Summary:  On April 15, 2025, the European Patent Office (EPO) Boards of Appeal issued a significant decision in case T 1193/23, addressing a patent dispute involving a method for safely starting and stopping a rotor in a rotor spinning machine (European Patent No. 3118356). This decision not only resolved issues of novelty and inventive step but also made a noteworthy statement about the role of large language models (LLMs) like ChatGPT in patent law, explicitly clarifying that such models do not qualify as a “person skilled in the art.” This blog post delves into the claims at issue, the citation regarding LLMs, and the arguments surrounding the interpretation of the claims in relation to the skilled person, offering insights into the evolving intersection of artificial intelligence and patent law.

                                Background 

The case arose from an appeal by the opponent (Saurer Spinning Solutions GmbH & Co. KG) against the Opposition Division’s decision to uphold the patent held by Rieter CZ s.r.o. The patent concerned a method and apparatus for safely starting and/or stopping a rotor in a rotor spinning machine, a critical process in textile manufacturing to ensure operational safety and prevent damage. The appeal challenged the patent’s novelty and inventive step under Articles 54 and 56 of the European Patent Convention (EPC), with key disputes centering on the interpretation of claim terms and their disclosure in prior art documents D3 (EP 1904754 B1) and D6 (EP 1612308 A2).

The patent EP 3118356 B1 (revoked) may be accessed via the EPO Register.

The Abstract (from the US equivalent, US 10,443,158 B2) provides: A method is provided for the safe starting and/or stopping of a rotor of a rotor spinning machine for the production of yarn, the spinning machine having a multiple number of rotors that rotate in a respective rotor housing covered by a lid. Each rotor is driven by its own motor and is held in at least one radially and/or axially active magnetic bearing by means of a position controller. Each motor is in communication with a control unit through a data connection to control the rotor in various operating states. One or more of the following conditions is checked against predetermined target values or states: (1) control for the drive of the rotor; (2) position control for the active magnetic bearings; (3) data connection for controlling the motor. In the event that the predetermined target values or states are not met, start of the rotor is blocked or the motor that is already running is selectively stopped.

                             Claims at Issue

The decision focused on the claims of the main request and auxiliary requests 1 to 5.  The key claims as outlined in the decision are:
Main Request – Claim 1
Claim 1 of the main request described a method for safely starting and/or stopping a rotor in a rotor spinning machine.  
For the purpose of explanation, the claim from the US equivalent is extracted below; the German claims are in pari materia.

1. A method for the safe starting and/or stopping of a rotor of a rotor spinning machine for the production of yarn, the spinning machine having a multiple number of rotors that each rotates in a rotor housing covered by a lid, whereas each rotor is driven by its own motor and is held in at least one radially and/or axially active magnetic bearing by means of a position controller, each of the motors in communication with a control unit through a data connection to control the rotor in various operating states, the method comprising:

checking one or more of the following conditions against predetermined target values or states: (1) control for the drive of the rotor; (2) position control for the active magnetic bearings; (3) data connection for controlling the motor;

in the event that the predetermined target values or states are not met, start of the rotor is blocked or the motor that is already running is selectively stopped; and

wherein in the event of a loss of a power supply to the motor, the speed of the rotor is slowly reduced in coordination with additional drives that are necessary for yarn production in order to maintain yarn production as the rotor speed is reduced. 
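The claimed safety logic is, at its core, a conditional control flow: check one or more conditions against predetermined target states, then block the start or selectively stop a running motor if any check fails. The following Python sketch illustrates that logic only; it is not the patented implementation, and all function and parameter names are hypothetical.

```python
# Schematic sketch of the claimed safety-check logic (illustrative only,
# not the patented implementation). All names are hypothetical.

def safety_check(drive_ok: bool, position_control_ok: bool,
                 data_link_ok: bool) -> bool:
    """Check the three claimed conditions (drive control, position
    control of the magnetic bearings, data connection) against their
    predetermined target states."""
    return drive_ok and position_control_ok and data_link_ok


def control_rotor(running: bool, drive_ok: bool,
                  position_control_ok: bool, data_link_ok: bool) -> str:
    """Return the action taken when the checks are evaluated."""
    if not safety_check(drive_ok, position_control_ok, data_link_ok):
        # Feature 9: if targets are not met, block the start of the
        # rotor, or selectively stop the motor that is already running.
        return "stop_motor" if running else "block_start"
    return "start_or_continue"
```

For example, a failed data-connection check on a running rotor yields a selective stop, while the same failure before start-up blocks the start.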

                                                                                        Large Language Models

A pivotal aspect of the decision was the Board’s stance on the use of LLMs, more specifically ChatGPT, in interpreting patent claims. The respondent (patent proprietor) referenced ChatGPT responses during the oral hearing to support their interpretation of terms like “position control” and “check” versus “monitor.” The Board addressed this in sections 1.1.1 and 1.1.6 of the decision, citing a prior decision (T 0206/22) to reinforce its position:


Headnote: The general increase in the spread and use of chatbots based on language models (“large language models”) and/or artificial intelligence alone does not justify the assumption that a received answer – which is based on training data unknown to the user and can also depend sensitively on the context and the precise formulation of the question(s) – necessarily correctly reflects the expert’s understanding of the respective technical field (at the relevant time) (see 1.1.1).


1.1.1 In the oral hearing before the Board of Appeal, the respondent referred, with regard to various terms used in claim 1 – in particular “position control” and “check” as opposed to “monitor” – to answers received from the chatbot ChatGPT in response to corresponding inquiries. The respondent did not submit the extensive …, partly bullet-pointed answers to the file in writing.

Their precise content therefore cannot be taken into account for the present decision. However, the Board notes in this context that the answer of ChatGPT is in itself irrelevant, since what matters is the interpretation of the claim as understood by the skilled person (see also T 206/22, Reasons 1.). The general increase in the spread and use of chatbots based on language models (“large language models”) and/or on “artificial intelligence” still does not justify the assumption that a received answer – which is based on training data unknown to the user and can also depend sensitively on the context and the precise formulation of the question(s) – necessarily correctly reflects the expert’s understanding of the respective technical field (at the relevant time). Proof of how certain terms in the claim of a patent (or a patent application) are interpreted by the person skilled in the art can be provided, for example, by appropriate specialist literature. The alleged different …

                                                                                          Key Finding 

This citation underscores that LLMs, despite their advanced capabilities, do not embody the expertise or perspective of a person skilled in the art, as their responses rely on opaque training data and can vary based on query phrasing.
 
                                                                  No Absolute Bar to LLM Evidence
The Board did not impose a complete bar on using LLM-generated evidence. It noted that the respondent’s failure to substantiate ChatGPT’s responses with additional proof, such as technical literature, was a key reason for their dismissal (section 1.1.1). This suggests that LLMs could be used as supplementary tools if paired with robust evidence, such as peer-reviewed publications or industry standards, to align with the skilled person’s perspective. This nuanced stance opens the door for future use of AI in patent proceedings, provided it is properly contextualized.

                                                                    Claim Construction | Who Is the Person Skilled in the Art

The Board’s analysis of the claims and their novelty/inventive step hinged on how a person skilled in the art would interpret key terms, particularly “position control,” “check,” and the scope of features 8.1, 8.2, and 9. The arguments and findings included:
  1. Respondent’s Reliance on ChatGPT:
    • The respondent argued that “position control” (feature 6 and 8.1) refers to a device, not a process, and that “check” implies a one-time verification rather than continuous monitoring. They supported this by citing ChatGPT responses, suggesting that the AI’s interpretation aligned with their view.
    • The Board rejected this approach, emphasizing that the person skilled in the art relies on technical expertise and specialist literature, not AI-generated responses.  The lack of submitted ChatGPT responses in writing further limited their consideration (section 1.1.1).
  2. Interpretation of “Position Control”:
    • The appellant argued that “position control” could encompass both a device and a process, as the claim was directed to a method. The Board agreed, noting that the claim’s context did not restrict “position control” to a device alone (section 1.1.7). The phrase “by means of a position control” could be interpreted as a process, consistent with the method claim’s focus on procedural steps.
  3. Interpretation of “Check” vs. “Monitor”:
    • The respondent contended that “check” (features 8.1 and 8.2) meant a one-time verification, distinct from continuous monitoring, and cited ChatGPT to differentiate these terms. The Board disagreed, finding that “check” could include monitoring, especially since feature 9’s consequences (blocking or stopping the rotor) implied repeated checks during operation to ensure safety (section 1.1.6). The patent’s description (paragraphs 10 and 13) supported this broader interpretation, referring to “monitoring of the position control.”
  4. Scope of Features 8.1, 8.2, and 9:
    • The Board found that the claims did not specify how or what aspects of position control or data connection were checked, allowing a broad interpretation (section 1.1.8). This included checking parameters like bearing air gaps, as disclosed in D3, which the Board deemed sufficient to meet features 8.1 and 9 (section 1.4.2).
    • For feature 8.2, the Board considered a data connection check implicit in D3, as it was a prerequisite for the machine’s operation, despite not being explicitly mentioned (section 1.5.3).
  5. Skilled Person’s Perspective:
    • The Board emphasized that the person skilled in the art, with expertise in rotor spinning machines, would interpret the claims based on technical knowledge and context, not AI outputs. For instance, features 3 and 4 (multiple rotors in lidded housings) were implicitly disclosed in D3, as such configurations were standard in the field (section 1.3.4). 

                                             Implications for Patent Law

The T 1193/23 decision has significant implications for the use of AI in patent proceedings:
  • LLMs as Tools, Not Experts: The Board’s rejection of ChatGPT as a proxy for the skilled person reinforces that LLMs are tools for information retrieval, not substitutes for human expertise. Their outputs lack the technical grounding and reliability required for legal interpretations.
  • Claim Interpretation Standards: The decision underscores the importance of grounding claim interpretations in specialist literature and technical context, ensuring consistency in patent examination.
  • AI in Patent Practice: While LLMs can assist in drafting or analyzing patents, their role in legal arguments must be carefully limited, especially in disputes requiring the perspective of a skilled person.

                                             Comparing EPO and USPTO Approaches to AI/LLM Usage

The EPO and USPTO adopt distinct approaches to AI/LLM usage in patent proceedings, reflecting their differing legal frameworks and priorities.
 
EPO Approach
  • Skeptical Stance on LLMs: The T 1193/23 decision (section 1.1.1) explicitly rejects LLMs as proxies for the skilled person, citing their opaque training data and query sensitivity. The EPO requires LLM evidence to be substantiated with technical literature, emphasizing human expertise.
  • Person Skilled in the Art: The EPO defines the skilled person as a hypothetical expert with general knowledge in the field, relying on prior art and standard practices. LLMs fail to meet this standard due to their lack of technical grounding (T 0206/22).
  • Procedural Rigor: The EPO’s Rules of Procedure of the Boards of Appeal demand robust evidence, as seen in T 1193/23’s dismissal of unsubstantiated ChatGPT responses. AI tools might be permissible as supplementary aids but not authoritative sources.
  • Policy Context: The EPO’s cautious approach aligns with its strict patentability criteria (e.g., Articles 54, 56 EPC), prioritizing technical contribution and clarity in claim interpretation.
USPTO Approach
  • Pragmatic Embrace of AI: The USPTO has embraced AI tools to enhance efficiency, as seen in initiatives like the AI-based Patent Search System. However, no explicit guidance equates LLMs with the skilled person, and their use in legal arguments remains untested in precedential cases.
  • Person of Ordinary Skill in the Art (POSA): The USPTO’s POSA, defined under 35 U.S.C. § 103, is similar to the EPO’s skilled person, requiring ordinary creativity in the field. Courts (e.g., KSR v. Teleflex, 2007) emphasize human judgment, suggesting LLMs would not qualify as POSAs without substantiation.
  • Flexible Evidence Rules: The USPTO’s Manual of Patent Examining Procedure (MPEP) allows diverse evidence, including technical publications and expert declarations. LLM outputs could be admissible if corroborated, as seen in reexamination or PTAB proceedings, but no case directly addresses this.
  • Policy Context: The USPTO’s focus on innovation, as outlined in its 2024 AI Guidance, encourages AI use in patent examination (e.g., prior art searches) but maintains human oversight. The guidance does not address LLMs in legal arguments, leaving room for flexibility compared to the EPO’s stricter stance.
 

                                                         Comparison and Policy Implications 

Comparison

  • Philosophical Difference: The EPO’s conservative approach prioritizes technical rigor, rejecting LLMs unless substantiated, while the USPTO’s pragmatic stance permits AI tools for efficiency, with less explicit restriction on their argumentative use.
  • Evidence Standards: Both require corroboration for AI evidence, but the EPO’s RPBA imposes stricter procedural hurdles than the USPTO’s MPEP, which allows broader evidentiary flexibility.
  • Impact on Practice: EPO practitioners must pair LLM outputs with technical literature, while USPTO practitioners may face fewer initial barriers but still need human validation to meet POSA standards.
  • Future Convergence: As AI adoption grows, the USPTO may adopt clearer guidelines akin to the EPO’s, especially if PTAB or Federal Circuit cases address LLM usage, potentially aligning with T 1193/23’s substantiation requirement.
Implications for Pending Cases
  • EPO: Pending cases might see increased scrutiny of AI-based arguments, requiring robust substantiation, as T 1193/23 sets a precedent for rejecting unsubstantiated LLM evidence.
  • USPTO: Pending cases may leverage AI tools for prior art searches or claim drafting, but legal arguments using LLMs will likely require human expert corroboration to align with POSA standards, especially in PTAB proceedings.
  • IPO: The Indian Patent Office, influenced by EPO precedents, will likely adopt a similar substantiation requirement, impacting AI-related cases by emphasizing technical evidence over AI outputs.