Schwegman Lundberg & Woessner

C.I.T. v. Hughes Comm. – Survival Guide for Software?

On November 3, 2014, in Cal. Inst. of Tech. v. Hughes Communications, 2014 U.S. Dist. LEXIS 156763 (C.D. Cal. 2014), Judge Mariana Pfaelzer penned the most thorough defense of software claims attacked under s. 101 that I have seen since State Street Bank. The opinion is also useful because it repeatedly cites, and often distinguishes or explains, Mayo, and because it is very critical of the analytical framework employed by the same court in McRO (Planet Blue) v. Namco, a September decision on which I posted earlier. (A copy of this decision can be found at the end of this post.)

The heart and soul of the opinion is the Judge’s dismissal of the “point-of-novelty” approach that she finds was used in McRO, as opposed to the “purpose” test that she applies in this opinion:

“McRO offers an interesting but problematic interpretation of s. 101…as requiring a point-of-novelty approach, in which courts filter out claim elements found in the prior art before evaluating a claim for abstractness….This Court finds this methodology improper for three reasons. The first reason is that the Supreme Court has held that novelty ‘is of no relevance’ when determining [patent-eligibility]…the Supreme Court did not revive this approach in Bilski, Mayo or Alice. Admittedly, Mayo does require courts to ignore [“conventional activity”] at step two, but neither Mayo nor any other precedent defines conventional elements to include everything found in [the] prior art….[C]ourts must follow the guidance of Diehr, which discourages courts from ‘dissecting a claim into old and new elements.’…[a]ccording to Alice, courts should not even consider whether elements are conventional unless the court determines that the invention is abstract at step one. Courts must filter out elements only at step two [of the Mayo analysis]….Finally…an improvement to software will almost inevitabl[y] be an algorithm or concept which, when viewed in isolation, will seem abstract. This analysis would likely render all software patents ineligible, contrary to Congress’s wishes.”

The test that the Judge finds in the precedent could be called the “purpose test”:

“In Diehr, the dispute boiled down to what the majority and dissent were evaluating for abstractness. The Diehr majority took the correct approach of asking what the claim was trying to achieve, instead of examining the point of novelty. Courts recite a claim’s purpose at a reasonably high level of generality. Step one is sort of a ‘quick look’ test, the object of which is to identify a risk of preemption and ineligibility….After determining the claim’s purpose, the court then asks whether this purpose is abstract. Age-old ideas are likely abstract, in addition to the basic tools of research and development, like natural laws and fundamental mathematical relationships.”

After step one, the court is permitted to disregard conventional elements, “[that] may be one that is ubiquitous in the field, insignificant or obvious…. A conventional element may also be a necessary step, which a person or device must perform in order to implement the abstract idea.… However … conventional elements do not constitute everything in the prior art, although conventional elements and prior art may overlap…. A combination of conventional elements may be unconventional…. Courts should consider mathematical formulas as part of the ‘ordered combination’ even though in isolation, the formulas appear abstract.”

Judge Pfaelzer found that the purpose of the CIT claims (to code and decode data to achieve error correction) is abstract and that, written broadly, they could threaten to preempt the entire field of error correction. At step two, however, the court focused on specific claim limitations and found meaningful limitations that represent sufficiently inventive concepts in the field, “such as the irregular repetition of bits and the use of linear transform operations.” While some of the limitations were algorithms, the court found them to be “tied to a specific error correction process.” The court concluded:

“These limitations are not necessary or obvious tools for achieving error correction, and they ensure that the claims do not preempt the field of error correction. The continuing eligibility of this patent will not preclude the use of other effective error correction techniques.”
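For readers less familiar with the underlying technology, the claim limitations the court highlighted, irregular repetition of bits and linear transform operations, describe the building blocks of repeat-accumulate style error-correction coding. The following is only a rough, hypothetical sketch of those two ideas, not the patented method; all function names and the repeat counts are my own for illustration:

```python
def irregular_repeat(bits, repeats):
    """Repeat each information bit a per-bit number of times.

    The repetition is 'irregular' because the repeat count varies
    from bit to bit rather than being uniform.
    """
    out = []
    for b, r in zip(bits, repeats):
        out.extend([b] * r)
    return out

def accumulate(bits):
    """Running XOR of the bit stream (a mod-2 accumulator).

    This is one simple example of a linear transform over GF(2).
    """
    acc = 0
    out = []
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

# Illustrative use: encode a 4-bit message with varying repeat counts,
# then pass the repeated stream through the accumulator.
message = [1, 0, 1, 1]
repeated = irregular_repeat(message, repeats=[3, 2, 3, 2])
codeword = accumulate(repeated)
```

The redundancy introduced by the repetition is what lets a decoder recover from transmission errors; the court's point was that claiming this particular combination of steps does not foreclose the many other ways of achieving error correction.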

In Mayo, the claim limitations of sampling and measuring metabolite levels were obvious tools for achieving proper dosing, but did the claim preempt the entire natural phenomenon of “too much drug, bad; too little drug, bad”? The Supreme Court certainly thought so, and I have characterized the claims as directed to an old use for an old compound. The issue of those pesky numerical limitations remains, however, and I would pay a lot (if that were proper) to read Judge Pfaelzer’s “up close and personal” analysis of that decision.

Cal. Inst. of Tech. v. Hughes Communs., Inc.
