Friday, October 19, 2012

Perils of free software

My son, David, is a Stallmanite. This means that he has a religious attachment to free software. 

He recommended "Libre Office" to me for use on my new deskbook, since I did not want to pay for Microsoft Office. I used this software to work on my third novel last summer.

Now I'm back using Microsoft Word for Mac 2011 and trying to process the .rtf files generated by Libre Office.

I had previously noted last summer that Libre Office deleted a lot of punctuation from the .rtf files coming from Word, which was a huge problem.  Now I am noticing that it inserted Chinese characters going back to Word.  It particularly seemed to create a sequence that results in Word for Mac 2011 interpreting directional apostrophes and quotation marks as part of the succeeding character.
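
For what it is worth, this looks like a classic encoding mix-up: text stored as UTF-8 bytes being read back under a Chinese double-byte encoding such as GBK.  I cannot say whether that is what actually happened between Libre Office and Word, but the small Python sketch below (purely illustrative, not anyone's actual code) reproduces the symptom of a directional apostrophe turning into Chinese characters fused to the letter that follows it:

    # Illustrative only: a curly apostrophe written out as UTF-8 bytes,
    # then read back under the Chinese GBK encoding, comes out as Chinese
    # characters that swallow the following letter.
    text = "it\u2019s"                    # "it's" with a directional apostrophe
    raw_bytes = text.encode("utf-8")      # the bytes as they might sit in a file
    garbled = raw_bytes.decode("gbk", errors="replace")
    print(garbled)                        # prints something like "it鈥檚"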

My son's passion for free software is undiminished and he believes I should abandon .rtf files in favor of open document files. He cites

http://diaryproducts.net/for/geek/microsoft_rtf_specification_nightmare

I have corresponded with Libre Office.  They apparently lack Microsoft Word for Mac 2011 and therefore cannot reproduce the problem.

On the one hand, I suspect that David is right that Microsoft is doing things to make its file formats indecipherable so that competitors cannot make software that reads them. On the other hand, I value compatibility more than idealism on this issue and feel annoyed with David for inducing me to take on this obscure office suite.

Addendum 11/3/12

My son persists in touting the benefits of free software and cites this:




Addendum 11/26/12

Someone claims that Libre Office 3.6.3 has corrected this problem.  I don't know.

Addendum 12/20/12

I had an old computer around running Windows that had died of a virus.  I had already paid twice to revive it, $200 a pop; and did not want to pay again.  We decided to install Ubuntu on it.  Then we wanted to add a wireless card, so we could interface with our wireless router.  We bought a netis WF-2117.  It doesn't work.  I contacted them.  They say they only support Windows.  Frustrating.

Tuesday, October 2, 2012

A copy of my comment from Judge Posner's recent blog


Here in the USA, we live in an environment where traditionally there have been patents and copyrights.  This legal tradition has encouraged a culture where innovation is valued, because innovators have reaped the rewards of innovation.  In cultures where innovation was not rewarded, innovation was not valued in the same way.  As a result of the culture established here, due to strong intellectual property protection, the USA has traditionally had the best science and engineering in the world.

Software people cite the early progress in the software industry, when patent protection was either not available or not sought, as evidence that patent protection is not necessary.  This is a fallacy.

First, those software people worked in this culture, which had an ingrained encouragement of innovation that came from our history of intellectual property protection.

Second, there was quite early on a strong move for at least copyright protection to help out those innovators.  Early copyright cases quickly started talking about protecting "structure, sequence, and organization," to try to extend protection as broadly as possible.  This was a bit of a stretch in the law, but it showed a recognition of the importance of rewarding those who created economic benefit to the country.

Problems with software patents have continued because at first the United States Patent and Trademark Office refused to hire patent examiners with a computer science background, so poor art searches were performed.

More problems have been created because the original Supreme Court case on this topic, Gottschalk v. Benson, has severe logical flaws and a nonsensical result.  I encourage people to read my brief in the Bilski case for more discussion of that.  My brief is up on my blog.

The nonsensical opinion in that first case has resulted in extensive legal uncertainty and much litigation, leaving the entire field of patent protection for software unsettled for almost half a century.

Have nots always want to take from haves.  People who don't have money want to take money from people who do.  People who don't have houses want to take housing from people who do.  People who do not have intellectual property want to take it from those who do.   I find this ethos repugnant.

Patents need to be wholeheartedly endorsed by statute, and the entire line of cases stemming from Gottschalk v. Benson needs to be overturned.

Moreover, the idea that mathematics is not an invention also needs to be overturned.  

Tuesday, September 18, 2012

Ernestine C. Bartlett

Just a quick note on the passing of my colleague, Ernestine Bartlett, obituary here

http://www.legacy.com/obituaries/lohud/obituary.aspx?n=ernestine-conner-bartlett&pid=159874162#fbLoggedOut

She, like me, was a patent attorney at Philips Electronics North America Corporation.  We worked together for at least 10 years.

She was always beautiful, elegant, warm, and charming.  She was a classic lady, the sort of person everyone liked.   In her legal advice she was well-informed, cautious, conservative, and by the book.

Gone too soon.

Friday, September 14, 2012

My Viennese grandmother's recipe for Linzertorte


Most of the time, if I can ever find Linzertorte in restaurants or bakeries (and it is hard to find, because real Austrian cooking is very rare in the U.S.), it is not at all the right stuff.  When made correctly, it is very good.  If made with an ordinary pie dough, as I sometimes have had it, it is just a cobbler.  A cobbler is not a Linzertorte.

For myself, I no longer make it.  It requires ground almonds in the dough, and no one in my family other than me is willing to eat nuts, and I don’t eat desserts at all any more, so I hope some of you out there will be able to enjoy it. 

Linzertorte

Please note measurements are given in weights in this recipe, which is common in European recipes.  The weights were originally metric, but my mother (who was a proper American WASP of the old school) converted them to English measures, so that we could make them in the United States.  The way we measured ingredients — which are commonly measured in cups in the United States — is that we would put a paper plate on a small postal scale.  Then the postal scale had to be readjusted to zero, with the paper plate on it, so that it would give correct readings.  Then we would sift or place the ingredients onto the paper plate.  Actually we used a separate paper plate for each ingredient that needed to be measured. 

Alternatively, if you had a digital postal scale, I suppose you would have to subtract the weight of the paper plate from the total weight in order to get the proper weight of ingredients.  Nowadays, a food scale can be zeroed down to ignore the weight of a plate.
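
For example, if the paper plate alone weighs half an ounce and the scale reads ten and a half ounces with the flour on it, the flour itself weighs ten ounces.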

I seem to recall that this recipe may actually have made more than one Linzertorte, possibly one and a half.  It may depend on the size of the pie plate or cake pan that you use or how thick you make the crust.

  • 10 ounces all-purpose flour, sifted
  • 10 ounces butter
  • 5 ounces granulated sugar, sifted
  • 5 ounces ground almonds
  • ½ ounce unsweetened or semisweet chocolate, grated  [I think Linzertorte does not necessarily have to have chocolate in it.  But I fail to see why anyone would want to skip the chocolate.]
  • ½ teaspoon of cinnamon
  • juice of a whole lemon
  • peel of the same lemon, using just the thin yellow part, not the white part, grated or hashed
  • 2 egg yolks
  • 1 jar fruit preserves, preferably raspberry or apricot

The chocolate must be semisweet or bitter.  On no account should milk chocolate be used.  The chocolate needs to be grated.

If the almonds are not already ground, they should be covered with boiling water.  Then let them stay awhile in the water until the peels come off easily.  Then grind them.

Cream butter and sugar together.
Then add the egg yolks and seasonings.
Mix in the flour in parts.

All these ingredients have to be kneaded together.  (My grandmother said either on the kitchen sideboard or on a special wooden board.  I’m not sure why she added this.  I don’t see why you couldn’t knead it on the table, but you might want to put wax paper underneath, as the dough can get a bit messy while you’re kneading it, when it is soft.)

When you feel that the paste is ready  (well blended), you wrap it in cloth and put it in an icebox for at least two hours.  You can also make it the day before you bake.  Indeed, I seem to recall that, since the recipe made more than one Linzertorte, there was often a ball of dough left over in the refrigerator for a second one another day.

You need a rolling pin and a greased pie plate.  (My grandmother said “cake plate,” but her English was not so very good, so I suppose she really meant “pie plate.”  A pie plate was certainly what my mother used.)  Roll out the dough to about a quarter of an inch thick (actually my grandmother did not say how thick, but this is my recollection from when I made it), being careful not to let it warm up.  You can measure a piece that fits in the bottom of the pan by tracing around the bottom of the pan on the table with a knife.  Put the bottom piece in the pie plate.  Then you need to use a part of the dough to make a rim a little less than an inch high all around the sides, sitting on the bottom part of the dough.


Then you spread the preserves all over with a knife.

On top of that, you put a grid made of dough, with each individual bar being about a finger wide, and thin, thinner than the bottom.  You start out with a long vertical piece across the middle of the pie plate, then make shorter parallel vertical ones toward the sides.  Then you cut small strips to go horizontally.  The horizontal strips should not overlap the vertical ones, but should just be set between them.



Bake in a medium oven for about three-quarters of an hour. 

Normally this is to be served slightly warm or at room temperature, not chilled, but not piping hot either.

-------------------------

Please let me know if you made this or need help making it.  You can contact me on Twitter @AnneBarschall or Instagram @barschall

------------------------------------------------------------------

Addendum 3/9/23:

I have found that one can take shortcuts and still get a great result.  I can microwave the unsweetened chocolate, rather than grating it.  I can buy raw almond butter, albeit not blanched, and avoid grating the almonds.  I can microwave the butter and then stir the sugar in -- rather than crushing it in.  This sort of thing saves a lot of time.

Also, I think it's best to make it in a 10" pie plate.  If I recall correctly, it then makes just one torte.

Saturday, August 4, 2012

Philosophical musings on the defects of physics

Contaminated with a mathematics that adopts the assumption of the existence of the concept of infinity, physics demonstrates that no physical reality corresponds with this assumption. The universe is large, but finite. Nothing is infinitely large. By the time one considers phenomena at the molecular level, and certainly below, matter and energy are quantized. Nothing is infinitely small.

How is this not a reductio ad absurdum?

Some historians assert that the original scientists proceeded from the concept of a loving, just, and merciful God. They felt that such a God would have to govern the universe in accordance with understandable laws -- the absence of laws being capricious and therefore tyrannical and unloving; the absence of understandability, analogously, being capricious, tyrannical, and unloving.

Proceeding from these assumptions of consistency, predictability, and understandability, scientists began investigating a universe that they believed must behave in accordance with a mechanistic model, ultimately susceptible of mathematical modeling.

The field of quantum mechanics demonstrates that at a fundamental level physical phenomena are random, and therefore arbitrary and capricious.

How is this also not a reductio ad absurdum?

When I was a physics student, my physics professor baldly stated that the Dirac delta, aka impulse function, had been proven by mathematicians not to exist; but that we were going to use it anyway, because it was useful. 

Negating infinity, but using geometry and calculus based on infinity; continuing to use the scientific model after having proven it false; using mathematics while baldly rejecting it -- is physics not, at its core, a dishonest topic?

When I was a T.A. for a physics course in college, I spent many hours trying to explain to an otherwise intelligent student why, in two dimensional Newtonian mechanics, we were going to break up motion into x and y coordinates. He failed to understand this concept. I came to the conclusion, not that he had a learning disability or was stupid, but that his intuition rejected this model. Indeed, it is odd to suppose that motion can be so decomposed.

You ask why women do not enter science. In the corporate world, women have often been whistle blowers -- pointing out egregious malfeasance by male superiors. Perhaps, in science, women look intuitively at the faulty reasoning that science is laced with and reject it as nonsensical. 


Also at http://tl.gd/im1u4r

Addendum 11/3/12:

I contacted my old physics professor and he denied saying that the Dirac delta had been disproved by mathematicians.  He said that mathematicians have used series of probability distributions to justify it, but he did say that Dirac himself used the function without adequate proof of its existence.
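
For anyone curious what that justification looks like, here is a small numerical sketch (my own toy illustration, not my professor's argument): a sequence of ever-narrower Gaussian distributions behaves more and more like the Dirac delta, in the sense that integrating a smooth function against them approaches the value of that function at zero.

    import numpy as np

    # Toy illustration: narrow Gaussians standing in for the Dirac delta.
    # As the width shrinks, the integral of f(x) times the Gaussian
    # approaches f(0), which is the delta function's defining property.
    def gaussian(x, width):
        return np.exp(-x**2 / (2 * width**2)) / (width * np.sqrt(2 * np.pi))

    f = np.cos                            # any smooth test function; f(0) = 1
    x, dx = np.linspace(-5, 5, 400001, retstep=True)
    for width in (1.0, 0.1, 0.01):
        approx = np.sum(f(x) * gaussian(x, width)) * dx
        print(width, approx)              # the values approach f(0) = 1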

Saturday, May 26, 2012

Mayo v. Prometheus -- a critique


            This writing will focus on how Mayo v. Prometheus[i] muddies the distinction between mathematical models and natural phenomena, and between human invention and things outside of human invention, and generally rests upon dicta in drawing an unsupportable conclusion.

I There are no “Laws of Nature”
            First, I would like to discuss how the term "law of nature" is an oxymoron.
            The court, in an unfortunate bit of obiter dictum, equates Einstein’s mathematical statement, E=mc2, with a “law of nature.”  This reasoning misconstrues the essence of science.
            Science involves creating hypotheses, designing observational protocols, performing the protocols, collecting the results, and verifying whether the results tend to confirm or controvert the hypotheses.  Frequently, the hypotheses take the form of mathematical models.
            Mathematical models are useful, because they allow prediction in a generalized fashion.  If confirmed, they enable prediction of a larger number of real phenomena than would a single observation or a set of observations.
            Confusing mathematics and the occurrences being observed is a common error of reasoning.   This error begins with very small children, when we teach them to count on their fingers.  We tell them they have five fingers on each hand.  Many of them come to believe that the number five exists in their hands, when in fact the number is a product of human thought.
            If we look more carefully at each finger, we see that no two are the same.  Each is a unique creation, with its own print, slight differences in shape and angles of joints, differences in mechanical function, and differences in control structure as well.  We designate the number five to describe how we perceive these fingers.  This perception is a characteristic feature of many human minds, not so much of the fingers.
            There are people who have little or no ability to perceive the world in a mathematical fashion.  We think of these people as having a learning disability.  We give them bad grades in mathematics in school.
            Nevertheless, these people may have many other cognitive abilities that show them to be intelligent.  Curiously, despite the obvious intelligence of some of these people, we think of them as having something wrong with them, as opposed to concluding from their very existence that mathematics exists only inside the heads of some people.
            E=mc2 is no law of nature.  It is a mathematical model, invented by a human being, which has proved useful.  Like all mathematical models it could be modified or discarded subsequent to further observation.  Nature, by contrast, will not go away with the model, should the model happen to be discarded.
            Scientific theories need not be expressed in mathematics.  For instance, the theory of evolution is expressed in words.  While I personally happen to believe in evolution, many people do not.  They love to point out that this theory could always be overturned in the face of further evidence, which is absolutely true.  Similarly to mathematical models, verbal theories do not exist in nature.  They are ideas that people have about nature that describe the results of observation.
            In general, nature has no “laws.”  People devise laws to help them describe what they observe about nature.  Personally, I see no reason why these human inventions should not be patentable, even though they are mathematical and human thought; however they have nothing to do with the claimed invention in the case at hand.
            It is lamentable that the Supreme Court finds it necessary to insert dicta on this subject, and has done so repeatedly.

II Natural Phenomena Are Not Novel -- and serve therefore as poor examples
            The Supreme Court also gives the example of a new plant or mineral discovered in the wild as not being patentable subject matter.  
            These are examples of phenomena of nature rather than “laws.”  
            Moreover, existing phenomena are not novel.  The fact that humans might not have previously known of them does not mean they did not exist.  These phenomena are therefore poor examples to reason from as they are in any case inherently unpatentable.  Referring to them is also lamentable dicta in the area of patentable subject matter.
            These existing phenomena examples are in some sense the converse of the discussion of mathematics and natural laws above.  The fact that mathematics or scientific theories exist in the minds of many people does not mean that these exist outside the human perception of them.  Even the writing of mathematics or theories on paper is not mathematics, absent the ability of the mind to perceive what has been written.  Similarly, the lack of perception of a mineral or plant does not prove its non-existence, nor does the sudden discovery of that same mineral or plant make it new.[ii]
            Natural phenomena exist in nature.  In order for humans to invent something, it must not exist in nature.   
            Reasoning based on flawed understanding of the distinction between scientific theories and natural phenomena is bad enough; but worse, still, these examples seem to have absolutely nothing to do with the case at hand and therefore do not serve as the basis for any legitimate reasoning.

III. The claimed invention is not a “law of nature” or natural phenomenon
            The court says that the claimed invention in Mayo v. Prometheus is a law of nature, but that is not so.  
            The claimed invention relates to the human administration of a man made chemical, a drug.  The drug is administered as part of a claimed process, not by nature.  Doses are adjusted based on responses of a natural system, but the responses are not natural, because they are the result of the administration of the drug, which is not a natural phenomenon. Therefore, the consequences of the administration of the drug are not a “law” of nature, but the responses of natural systems to human intervention.
            Let us consider the examples of: downhole equipment, which determines the existence of natural phenomena such as presence or absence of petroleum; and industrial inspection equipment, which determines whether manufactured goods comply with predetermined criteria.  In such cases, a discovery might be made that measuring in accordance with certain criteria would yield useful information.  
            Assuming that the measurement equipment is not new, but only the criteria are new, I do not think that previously anyone would have supposed that such a new use for an old machine would fall outside the domain of patentable subject matter.  Instead, the only question that might have presented was whether the new use was really non-obvious.
            This case therefore has broad ranging unanticipated implications.
            I find it disturbing that the Supreme Court has so frequently come down on the side of unpatentable subject matter.  This creates perpetual uncertainty in the law, and their reasoning does not hold up to scrutiny.
            In any case, I find that the reasoning again fails to support the conclusion.

IV. Misguided amicus briefs?
            Apparently, the court was influenced by the submission of amicus briefs from people in the medical field alleging that the patent in question was going to inhibit research.  I have not read these briefs, only the court's characterization of them.
            This particular patent is narrowly framed.  It is difficult to see how it could have a significant negative influence on research in general.   The conclusion that such a narrowly framed patent could have influence on the progress of the broad field of research is another example of obiter dictum.
            In general, one can find non-patent holders wishing that they could practice claimed inventions.  Similarly, one finds people without large amounts of money wishing they could be rich.  Does this mean that the court should just outright give the “have-nots” what they seek?

V. Conclusion
  1. There is no such thing as a law of nature;
  2. Natural phenomena are not novel, therefore any discussion of their falling within the domain of patentable subject matter is dicta;
  3. No natural phenomena are claimed in this application;
  4. The idea that this patent would have an inhibitory effect on research is far-fetched.
I find, therefore, that the reasoning in this case is so flawed as to be completely incapable of supporting the conclusion drawn.  Perhaps we can get Congress to overturn the thing?

[this blog was edited Dec. 1, 2012]



[i] decided by the Supreme Court of the United States on March 20, 2012 No. 10-1150
[ii] I gather that there are people who believe that if a tree falls in the forest and no one hears it then it did not make noise.  I am not one of those people.  

Tuesday, April 24, 2012

Nuance v. Bright Line, Philosophical Musing on the Anti-Common Law Bent Of the CAFC in Patent Cases

I wrote this paper before the Bilski decision, and have made some updates, but it does not include the most recent round of cases between the Federal Circuit and the Supreme Court.

Nuance v. Bright Line, Philosophical Musing on the Anti-Common Law Bent
Of the CAFC in Patent Cases
By Anne E. Barschall[i]

            The patent law community has remarked broadly on the propensity of the Court of Appeals for the Federal Circuit (“CAFC”) to adopt bright line tests, and the unusual interest the Supreme Court of the United States has taken recently in overruling such cases.[ii]  Patent cases are not generally considered very sexy, so in the past the Supreme Court was not as eager as it has been recently to take on such cases.  Patent attorneys have not tended to look at this dispute between the Supreme Court and the CAFC from a basic legal method or philosophical point of view. 
            This article began during a luncheon presentation by Judge Dyk of the CAFC in January of 2004[iii].  That court was designated by Congress as the one to hear all patent appeals, whether from the United States Patent and Trademark Office (USPTO) or from the Federal District Courts[iv].
The subject of Judge Dyk’s presentation was how to make obviousness decisions in patent law more predictable.  The judge expressed dissatisfaction with the concept that obviousness be determined by the gut feel of the trier of fact.  He wanted a simple, predictable, bright line test.
            During the course of this presentation, Judge Dyk opined that, when the Federal Circuit is sitting en banc, making important policy determinations that will affect all patent law decisions and presumably will result in general pronouncements of law, it would be very helpful for interested parties to submit amicus curiae briefs.  He said that he found that such briefs were most helpful if they spoke of commercial and practical considerations that might result from decisions in one way or another.  He disparaged briefs that spoke extensively of legal method or requested the court to restrict their deliberations to the facts before them.
            Then, on December 13, 2010, Judge Rader gave a presentation[v] mostly on other topics, but at the end was asked whether he thought -- in light of the history of the Supreme Court reversing the Federal Circuit on bright line decisions -- the Federal Circuit was going to persist in creating bright line tests.  Judge Rader, in return, posed the rhetorical question of whether the public ought always to have to consult with a judge in order to know what the law is or whether the public would not be better off having clear guidelines.
            Subsequently, on April 27, 2011[vi], Judge Gajarsa gave a presentation in which he opined that, in consistently overruling CAFC decisions, the Supreme Court is undoing the Congressional mandate for the Federal Circuit to provide a forum for unifying the patent law of the United States[vii].
            There are some basic philosophical considerations at play here.  Many of those have been debated by scholars of legal method in the past.  The author feels, however, that she has some different perspectives on this in light of her life experience.  This experience has included a) artificial intelligence techniques learned through writing patent applications and conferring with inventors, b) legal method and comparative law courses at Columbia Law School, and c) studying Asperger’s Syndrome subsequent to diagnosis of family members with this neurological condition.  She has found a number of common and interwoven themes in all of these areas relating to how people feel decisions should be made.  Tension between “deterministic” and “good enough” thinking pervades decision-making strategy in all of these domains.  The thesis of this paper will be that the Federal Circuit should become more comfortable with the rubric of “good enough” thinking at the expense of complete predictability and bright line tests.
            First, this paper will discuss some of the history of the different approaches adopted by the CAFC and the Supreme Court.  Second, aspects of neurology will be introduced.  Third, some basic legal method principles will be summarized.  Fourth, some artificial intelligence approaches to decision making will be discussed.  Fifth, a few articles about judicial decision making will be touched upon.  Lastly, conclusions with respect to the desirability of the common law approach will be drawn.

The CAFC and Bright Line Tests
            The statements Judge Dyk advanced in 2004 sounded a great deal as if the Federal Circuit, sitting en banc, would undertake to make legislation in unclear areas of the law – and that lobbying efforts would be helpful.  This seems to run contrary to the historical division of powers set forth in the United States Constitution, which established tri-partite government.[viii]  Moreover, bright line tests are not the common law approach. The statements of Judge Gajarsa in 2011 sounded a great deal as if he had over-interpreted the legislative mandate to unify patent law as authorizing the CAFC to throw out the common law and the final authority of the Supreme Court.  While these are very staid, well-educated men speaking in the soporific tones of senior legal scholars at after-lunch presentations, their statements seem to put them in a position of full-scale, hot-headed rebellion against both the common law and the Constitution.
            In fact, the Federal Circuit has shown a tendency to formulate bright line tests[ix] expressed in repeatable sound bites, rather than building law slowly through legal method.  Some examples of this can be found in the cases of Festo[x], KSR v. Teleflex[xi], and In re Bilski[xii].

FESTO
            The first case, Festo, related to the topic of prosecution history estoppel, also known as filewrapper estoppel.  Filewrapper estoppel is a patent law doctrine that holds that subject matter given up during patent prosecution may not be brought back to life during litigation.[xiii]  In this case, the Federal Circuit said
When a claim amendment creates prosecution history estoppel with regard to a claim element, there is no range of equivalents available for the amended claim element. Application of the doctrine of equivalents to the claim element is completely barred, during litigation.  [xiv]
This was a new formulation of the law, not previously appearing in any cases.
               This policy made patent prosecutors very nervous, because the simplistic test would give heightened weight to slight misstatements during prosecution, potentially unnecessarily limiting patent scope and exposing patent attorneys to malpractice liability.  Those who were or feared being accused of patent infringement tended to like the test, because it made it easier to get out of paying damages under a patent. 
               The CAFC Festo slip opinion brazenly began with 17 pages of obiter dicta, which was a review of patent law on the subject of the Doctrine of Equivalents, formulated by one of the parties, and totally divorced from any consideration of the facts of the case.  Only after this extensive “review of the law” did the court venture into the business of the facts of the case at hand.  The facts of the case were not tied to the stated conclusion, nor stated to mandate a change or restatement of the law.  Instead, the court purported to want to formulate a new test that would make litigation simpler.  This abstract statement of the law was a kind of legislation, whereas the common law would restrict judges’ opinions to reasoning from the facts of the case before them.
Ultimately, upon review, the Supreme Court reversed the decision in Festo in favor of a fuzzier standard[xv].  In their opinion, the higher court began, not with a review of the law, but with a review of the facts.  Then they concluded -- after critiquing inflexibility, and the changing of laws and expectations midstream -- that:
There are some cases, however, where the amendment cannot reasonably be viewed as surrendering a particular equivalent. The equivalent may have been unforeseeable at the time of the application; the rationale underlying the amendment may bear no more than a tangential relation to the equivalent in question; or there may be some other reason suggesting that the patentee could not reasonably be expected to have described the insubstantial substitute in question. In those cases the patentee can overcome the presumption that prosecution history estoppel bars a finding of equivalence. 535 U.S. at 740
The Supreme Court therefore embraced a more traditional, legal-method type of approach to the law than did the CAFC, an approach requiring review of the facts and consideration of complex subtleties.

KSR
            Prior to the Supreme Court decision in  KSR v. Teleflex, the Federal Circuit had formulated a bright line test relating to combining prior art references with respect to determinations of obviousness.  In this test, in order for an Examiner or litigant to be able to combine two references,[xvi] there would have to be some explicit suggestion in some piece of prior art that would lead to such a combination[xvii].  Patent prosecutors[xviii] liked this test, because patents were easier to obtain.[xix]  Accused patent infringers did not like it, because the test made patent invalidation more difficult. 
               In the CAFC case, Teleflex, Incorp. & Tech. Holding v. KSR  Int'l., 04-1152 (Fed. Cir. 2005), the district court’s finding was held to be defective because
Under our case law, whether based on the nature of the problem to be solved, the express teachings of the prior art, or the knowledge of one of ordinary skill in the art, the district court was required to make specific findings as to whether there was a suggestion or motivation to combine the teachings of Asano with an electronic control in the particular manner claimed by claim 4 of the '565 patent.[citation omitted] That is, the district court was required to make specific findings as to a suggestion or motivation to attach an electronic control to the support bracket of the Asano assembly.[xx]
Again, on review, the Supreme Court went for a fuzzier test.
Helpful insights, however, need not become rigid and mandatory formulas; and when it is so applied, the TSM test is incompatible with our precedents. The obviousness analysis cannot be confined by a formalistic conception of the words teaching, suggestion, and motivation, or by overemphasis on the importance of published articles and the explicit content of issued patents. The diversity of inventive pursuits and of modern technology counsels against limiting the analysis in this way. In many fields it may be that there is little discussion of obvious techniques or combinations, and it often may be the case that market demand, rather than scientific literature, will drive design trends. Granting patent protection to advances that would occur in the ordinary course without real innovation retards progress and may, in the case of patents combining previously known elements, deprive prior inventions of their value or utility…[xxi]
Rigid preventative rules that deny fact finders recourse to common sense, however, are neither necessary under our case law nor consistent with it. [xxii]
            Hence, in the specific area that Judge Dyk originally spoke of hoping to clarify, the Supreme Court rejected the concept of bright line tests and instead embraced the concept of “common sense,” in other words gut feel.
            Indeed, gut feel has been a hallmark of the common law for centuries.  An example of this, known to all lawyers, is the “reasonable man” standard in torts.[xxiii]

  What is a “reasonable man?” What is “obvious?”  These are mixed questions of law and fact, areas where language fails, where intuition prevails, where we look to juries to take the matter off our hands, because we would rather not have to decide ourselves.

Bilski
            In the recent case of In re Bilski[xxiv], the Federal Circuit again formulated a simple, bright line test, this time with respect to patentable subject matter.  This test has been called the “machine or transformation” test.
The Federal Circuit had previously raised quite a ruckus in the case of State Street Bank[xxv], by making broad statements in dicta that business methods should not be excluded from patentable subject matter[xxvi].  Before that, it had generally been believed that business methods were not patentable.  The statements were dicta, because the claims before the court recited business methods implemented on a computer – not business methods in the abstract carried out by people not using machines. 
The broad dicta in this case resulted in Congress modifying the statute[xxvii] to protect business people who had pursued trade secret protection for business methods, because those business people thought that business methods were unpatentable.  Moreover, seminars were held all over the country about this apparent change in the law, and patent prosecutors started formulating claims to business methods carried out by people unassisted by computers. 
One such patent application was the subject of the In re Bilski case.  This patent application recited a business method relating to hedging trades.  No apparatus was recited.  In its opinion, the Federal Circuit retreated from the broad statements in State Street Bank and formulated another simple, bright line test, namely that patentable subject matter must recite a machine or a transformation of matter.  This test made many people in the financial industry happy, because they did not want to have to worry about patent infringement.  The test made others unhappy – and was criticized as having unanticipated consequences, for instance with respect to non-destructive industrial testing methods.[xxviii]
Moreover, the test illustrates the unfortunate nature of broad dicta because of the apparently wasted efforts that followed the now partially repudiated State Street Bank decision.
The Supreme Court[xxix] had a divided opinion, whose exact interpretation cannot be precisely determined; however, a majority of the judges clearly rejected the idea of bright line tests to determine patentability, while affirming the result that the application in question did not recite patentable subject matter.[xxx]

Summary of Festo, KSR & Bilski
All three of the cases discussed above illustrate the attitudes expressed by Judges Dyk, Rader, and Gajarsa in their presentations before the NYIPLA and JPPCLE, to wit that bright line tests are desirable in unifying patent law.  The cases also illustrate the commitment of the Supreme Court to a more nuanced, common law approach.

Neurology
            The author has had family members diagnosed with Asperger’s Syndrome[xxxi].  This is a disorder that was only added to DSM-IV — the handbook of psychiatric disorders used by psychiatrists, psychologists, and insurers — in 1995, and is to be removed in DSM-V.  It is one of several psychiatric disorders that have become popular diagnoses, especially of children, in recent years.
            Asperger’s Syndrome is a mild disorder on the autistic spectrum.  People with this disorder are often measured as very intelligent when given conventional IQ tests, but they exhibit curious deficits in social functioning and also in how they think.  Because of their high IQs, mastery of obscure trivia, and academic style of speaking, people with mild autistic spectrum disorders often experience great success in academia.  For instance, some suspect that Albert Einstein suffered from an autistic spectrum disorder[xxxii].  The more successful a person is in academia, the more a person is regarded as “smart,” a feature that tends to get people placed in positions of responsibility.  Being “smart” in an academic way may not necessarily indicate good common sense, though.
            One deficit in thinking amongst those with autistic spectrum disorders is “deterministic” thinking, rather than “good enough” thinking.  This deficit has been well described by Dr. S. Guttstein.[xxxiii]
Dr. Guttstein tells the informative story of one of his patients who was admitted to an Ivy League college, but could not make very simple decisions.  The young man was expected to unpack a large stack of boxes in his dorm room.  He was paralyzed by this expectation, because he could not decide which box to open first.  He ended up calling Dr. Guttstein in the middle of the night after standing in front of the boxes for four hours. 
            Because the young man was a deterministic thinker, he wanted a perfect solution to the problem of the order in which to open the boxes.  Finding no such solution, he could not do the expedient thing and just start opening boxes at random.  This expedient thing would be “good enough” thinking, and would be the solution adopted by a neurotypical thinker, in other words a person without an autistic spectrum disorder.  The neurotypical person would likely just pick some box and start unpacking, knowing intuitively that there was no solution to the problem of which box to unpack first.[xxxiv]
            The deterministic thinker, due to perfectionism in the thinking process, will in this manner often face an inability to deal with ordinary life problems.  The seeking after a perfect solution leads to paralysis.
            An example of a concept similar to “good enough” thinking is called “satisficing.”  This term was developed by Herbert Simon and came from a combination of the words satisfy and suffice.[xxxv]
            Obviousness in the patent law — like the reasonable man standard in negligence law or the famous standard for pornography “I know it when I see it”[xxxvi] — lends itself well to a gut feel or intuitive approach. 
One classic example was the porcelain doorknob case.[xxxvii]  In this case, it was decided that a doorknob made of porcelain was not patentable, because it was obvious to make doorknobs out of porcelain once one had seen doorknobs made of other materials.  The court in question applied a sound bite to describe why this invention was obvious.  They said “no other ingenuity or skill being necessary to construct the knob than that of an ordinary mechanic acquainted with the business, the patent is void;” but what made them feel that no other ingenuity or skill was necessary?  Inherently, some intuition or common sense was involved in this decision.
Another formulation of why decisions were made under obviousness was the “flash of genius” test[xxxviii], where the courts asserted that in order for an invention to be non-obvious it must have involved a flash of creative genius.  Again, though, some intuition on the part of the courts must have gone into thresholding what was a flash of genius and what was not.
In some sense, any verbal formulation really begs the question of whether the words of the test in question are what guided the finder of fact, as opposed to the finder’s intuition or gut feeling about what is obvious; and whether words are actually capable of expressing the mental process that the finder of fact is following. 
Recent neurological research seems to be showing that the unconscious mind makes decisions first, with the conscious mind only later becoming aware of that decision[xxxix] – and then presumably developing a rationale for why the decision was made.  This neurological discovery would appear to attribute to the conscious brain a sort of delusional egomania, believing it is in control of decision making, when in fact it is not.  It only becomes aware, i.e. conscious, of decisions after they are made.
Preference for bright line tests in legal decision making, as manifested by CAFC decisions discussed above,  and by comments of individual judges, would appear to indicate discomfort with intuition, common sense, or “good enough” thinking  – and affinity for deterministic thinking. 

Legal Method and Comparative Law
            The author studied these topics over 25 years ago at Columbia Law School[xl], under the tutelage of Professors Arthur Miller and George Berman, respectively.  The following is a distillation of some concepts she took away from those courses.  These topics are well-developed by other authors and it is beyond the scope of this paper to give a complete review.  Instead, the writer hopes to give a flavor for some basic concepts in order to show patterns of thinking that carry over into other areas.
            Legal Method grew up in the British courts, starting in the late Middle Ages.  At first, a judge was someone who made decisions on disputes.  At some point, it was thought to be helpful if the judge would write a few words about why the decision went one way or another.  Later, it came to be expected that judges would try to use the reasoning of earlier judges, so that the law would be consistent, unless the facts were different.
            In trying to decide whether the reasoning of a prior case was controlling or not, a later judge should consider whether the prior court was the same court, a superior court, a lower court, or a parallel court, with the last two not being controlling, only persuasive.  In addition, the later judge should consider whether statements in the earlier opinion were “holding,” which meant that they were necessary to the decision and therefore controlling, or “obiter dicta,” which meant that they were only persuasive.  Moreover, the later judge should look at the facts to see if the reasoning applicable in the prior case was applicable in the later case. 
The quality of a lawyer was to be measured in large part by his or her ability to manipulate the subtle knife of differentiation, by persuasive categorization of prior court statements as dicta rather than holding and also by distinguishing facts of one case from another.[xli]  Such differentiation was difficult for most people to achieve, giving rise to one of the causes of the high price of legal representation.
            An interesting aspect of traditional law school education is the law school exam, which contains questions not susceptible of an unambiguously true answer.[xlii]  A desired answer must be chosen, then reasoning advanced to support that answer, possibly in accordance with a hypothetical client point of view.  Similarly, in legal practice, the hope is to come up with a legal way of reaching the client’s desired goals by constructing a path of reasoning, through the maze of law, which reaches those goals.
            Professor Berman introduced to the author the subject of codification, as a different way of approaching the law, in the course on Comparative Law.  He said that Napoleon invented this concept because he wanted the law to be easier to understand.  He wanted the law to be something like a telephone directory that everyone could have at home, so that everyone could learn the law without having to pay expensive attorney fees.  Many countries in Europe were very taken with the Napoleonic codes and still try to go with them.
            Professor Berman pointed out, though, that lawyers and litigation were not eliminated, despite this noble effort.  Instead it was discovered that somehow no one could write a code that completely anticipated every fact situation – or which could not be manipulated using the subtle knife of differentiation.  There was always a new case that arose that would fall into some grey or ambiguous area of the code.
            Coincidentally, around the time of taking that course, the author also read the book D. R. Hofstadter, Gödel, Escher, Bach (1999), which tried to explain Gödel’s theorem about the incompleteness of mathematics in terms that possibly a layman might hope to understand.  It struck the author that there was something very similar about the incompleteness of mathematics and the incompleteness of the Napoleonic Codes.  Somehow humans could not formulate by means of mere logic, language, and reason, a complete prior determination of how to make decisions. 
            Another similar concept arises in classical Judaism, where the name of the deity is considered unpronounceable,[xliii] and in Taoism, where it is said, when referring to the Tao, “The name that can be named is not the constant name.”[xliv]  Humans are limited.  We cannot describe everything.  We cannot predict everything.  We cannot write so perfectly as to eliminate unpredictability in judicial decisions by mere effort of drafting, no matter how much we might wish it.
            It seems that the distinction between “good enough” and “deterministic thinking” drawn with respect to Asperger’s Syndrome is also applicable in the law.  The common law empirical way of decision making seems more like a “good enough” solution, something that gives an answer in a quirky way; while codification seems more deterministic, giving an answer that seems more like a closed analytical framework.

Artificial Intelligence
The author has had the privilege of working on patent applications relating to a number of artificial intelligence based inventions.[xlv]  In artificial intelligence, practitioners use a combination of electronic hardware, software, knowledge of past artificial intelligence attempts, and knowledge gleaned from experts to attempt to replace, assist or even better an expert in performance of expert tasks.  Artificial intelligence techniques include decision-making algorithms that seek to allow machinery to come to correct determinations.
The author does not pretend to have formal training in artificial intelligence, but has found some of the approaches she learned about in preparing patent applications to be interesting from the perspective of looking at Federal Circuit bright line decisions, the deterministic/good enough decision making styles, the legislative/judicial distinction, and the codification/common law distinction.  Different practitioners in the field of artificial intelligence seem to have preferences for different types of algorithms, in other words different decision making styles, just as judges seem to have preferences for different styles.
Two patents that the author worked on will now be considered.  These are U.S. Pat. Nos. 6,601,053 (“Schaffer et al.”) and 6,604,005 (“Dorst et al.”).  These two patent documents deal with three different artificial intelligence techniques.  These are called “neural nets,” “genetic algorithms,” and “budding,” the latter also being referred to as A*.   Neural nets and genetic algorithms remind the author more of “good enough” thinking as described by Dr. Guttstein, while A* reminds the author more of “deterministic” thinking, also as described by Dr. Guttstein.[xlvi]
The author will now attempt to describe these techniques with some trepidation relating to technophobia in the general legal audience.

BUDDING
Budding involves representing a problem as a space within the computer, with each position within the space representing part of a solution to the problem.  For example, a problem --  together with a representation -- is shown below in figures 1a[xlvii] and 1b[xlviii].

These are only a small sample of the many figures in the patent.  The discussion here will not attempt a complete explanation of the algorithm of the invention, which would be better understood with reference to the original patent.  The effort here is only to give a flavor of what was involved, that any given problem might have a very peculiar internal representation. 
In this patent, the representation space, e.g. Fig. 1b, is searched methodically to find an optimal solution.  Arrow masses 2805 and 2806 show a stage in that search.  As the algorithm proceeds, the arrows gradually expand to fill the space – ultimately yielding an exhaustive search, so that the final solution can be taken to be optimal. 
While the solution may look mysterious to the uninitiated, in fact there is a very clear, logical way in which the solution technique relates to the problem to be solved – and the search is exhaustive, and therefore, in some sense, deterministic.
In other words, this algorithm might satisfy the befuddled student with Asperger’s Syndrome unpacking boxes described above.  If a proper representation of his box unpacking problem were to be devised, it could be searched with A* to yield the best solution that he hoped for.  He might then be absolutely guaranteed that he would not unpack his books before he found the disassembled pieces of the book case he hoped to put them in, for instance – or perhaps he would be guaranteed that he would find his change of underwear that he needed the next day before unpacking the text books from last semester, which he might not need again for many months, if at all.
            The search for a provably optimal and complete solution to a problem also bears a resemblance  to the Napoleonic attempts at codification, the idea that a solution might be found that would eliminate further searching, that would in some sense be complete or final.  The certainty, predictability and completeness of the solution here further reminds the author of Judge Dyk’s desire for finding a test for obviousness that would be more predictable, in other words prevent litigation by helping litigants foresee outcomes.
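
            For readers who want something concrete, the following short Python sketch gives the flavor of this style of search.  It is a generic, textbook-style A* on a little grid, my own illustration only, and not the representation or algorithm of the Dorst et al. patent, which should be consulted directly:

    import heapq

    def a_star(grid, start, goal):
        """Generic A* sketch: find the length of a shortest path on a grid.

        Cells containing 0 are open, cells containing 1 are blocked.  The
        search expands positions outward from the start, always taking the
        position with the lowest estimated total cost next.  Because the
        heuristic never overestimates, the first time the goal comes off
        the queue the answer is provably optimal.
        """
        rows, cols = len(grid), len(grid[0])

        def h(p):   # heuristic: grid distance to the goal, ignoring obstacles
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

        frontier = [(h(start), 0, start)]   # (estimated total, cost so far, position)
        best_cost = {start: 0}
        while frontier:
            estimated, cost, pos = heapq.heappop(frontier)
            if pos == goal:
                return cost
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (pos[0] + dr, pos[1] + dc)
                if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                        and grid[nxt[0]][nxt[1]] == 0
                        and cost + 1 < best_cost.get(nxt, float("inf"))):
                    best_cost[nxt] = cost + 1
                    heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt))
        return None                         # no path exists

    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    print(a_star(grid, (0, 0), (2, 0)))     # prints 6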

GENETIC ALGORITHMS
            Genetic algorithms fall into a less certain way of coming to some kind of problem solution.  In this approach, each proposed solution to a problem is represented as a string of computer characters[xlix].

 
The algorithm treats these strings like DNA.  Each number is regarded as a piece of a “chromosome.”  The chromosomes go through a mating process similar to what would happen to DNA in biological cells.



The algorithm cuts the strings as shown in Fig. 2b[l] and crosses the portions from different strings as shown in Fig. 2c[li] to yield two new strings, representing two new proposed solutions to a problem.  The example here is fairly simple, with short strings and only one crossover point.  Practical examples might be much more complicated.
The resulting strings, i.e. the new representations of a problem to be solved, can then be tested with respect to some criterion to see which represent better solutions to the problem.  The better ones are retained and the lesser ones are discarded, analogously to Spencer and Darwin’s “survival of the fittest”[lii] concept.  This process is repeated until some threshold is reached. 
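
As a rough sketch only (my own toy example, not the genetic algorithm of the Schaffer et al. patent), the following Python fragment shows the bare bones of the process just described: bit-string “chromosomes,” single-point crossover, occasional mutation, and survival of the fitter half of each generation.

    import random

    def toy_genetic_algorithm(length=20, population_size=30, generations=40):
        """Toy genetic algorithm: evolve bit strings toward all ones.

        Each candidate solution is a list of 0s and 1s (a "chromosome"),
        and its fitness is simply the number of ones it contains.
        """
        def fitness(chromosome):
            return sum(chromosome)

        population = [[random.randint(0, 1) for _ in range(length)]
                      for _ in range(population_size)]
        for _ in range(generations):
            offspring = []
            for _ in range(population_size):
                mom, dad = random.sample(population, 2)
                cut = random.randrange(1, length)     # a single crossover point
                child = mom[:cut] + dad[cut:]         # swap tails, as in Figs. 2b and 2c
                if random.random() < 0.05:            # occasional mutation
                    spot = random.randrange(length)
                    child[spot] = 1 - child[spot]
                offspring.append(child)
            # survival of the fittest: keep the better half of parents plus offspring
            population = sorted(population + offspring, key=fitness,
                                reverse=True)[:population_size]
        return max(population, key=fitness)

    best = toy_genetic_algorithm()
    print(sum(best), "ones out of", len(best))        # usually 20 out of 20
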
            Genetic algorithms do not pretend to find a best solution to a problem.  They only provide “better” solutions – and then only in some problems and not others.  The crossover points chosen might bear little or no relationship to the problem to be solved.  The resulting crossed over strings might in some cases give rise to a solution that is completely dysfunctional.  Whether the solutions are improving due to the mating type process is something that can only be determined by experimental testing.  Yet, researchers in this field find that sometimes solutions are improved by the genetic process.[liii]  
            For those who believe in evolution, this process would appear to be similar to the method of improving life forms to be found in nature.  In fact, in nature, many organisms appear that are not at all viable, that are stillborn or die quickly.  Yet, over time, organisms come into being that do survive and even thrive. 
            The genetic algorithm process differs from the A* process in that the internal representation of the problem, the chromosome-like string, need not be clearly related to the structure of the problem to be solved.  The only real proof that the process works is empirical, based on experiments.  There may be no logical justification for the result.
            This process could not be very satisfying to the deterministic thinker with his boxes in the college dorm room.  Indeed, the appearance of dysfunctional solutions could be very distressing.  Yet, over time, some “better” or “good enough” solutions often manifest with genetic algorithms.
            This process could not be very satisfying to Napoleon either, since it involves repeated trials to come to a solution, is not very predictable, and sometimes does not work.
            Presumably, also, this type of process would not be satisfying to legal scholars, like Judge Dyk, who seek predictability.

NEURAL NETS
            The Schaffer et al. patent used genetic algorithms to design another artificial intelligence type, the neural net.  The neural net attempts to mimic the way some scholars think neurons operate in the brain.

            The operation of neural nets is rather mysterious.  The operations performed by the nodes[liv] are adjusted until the network gives some desired set of outputs in response to some desired set of inputs.  The designer then hopes that the network will give desirable outputs in response to new inputs, as yet unknown.  There is no particular logical or provable reason why particular settings and configurations of the neural nets may result in a problem solution, at least no reason necessarily apparent to the designer.  The proof that the neural net works is again empirical, not the result of “proof” or “reasoning.”
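
Again purely as a toy of my own, and not the network of the Schaffer et al. patent, the following sketch shows the basic idea: a tiny network whose weights are nudged over and over until its outputs match a desired set of targets, here the classic exclusive-or pattern.

    import numpy as np

    # Toy neural net sketch: two inputs, a small hidden layer, one output.
    # The weights are repeatedly adjusted ("trained") until the outputs
    # match the desired targets for the four training inputs.
    rng = np.random.default_rng(0)

    inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    targets = np.array([[0], [1], [1], [0]], dtype=float)   # exclusive-or

    w1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))       # input -> hidden
    w2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))       # hidden -> output
    learning_rate = 0.5

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    for _ in range(20000):
        hidden = sigmoid(inputs @ w1 + b1)                   # forward pass
        output = sigmoid(hidden @ w2 + b2)
        error = output - targets
        # backward pass: nudge each weight in the direction that reduces error
        grad_out = error * output * (1 - output)
        grad_hidden = (grad_out @ w2.T) * hidden * (1 - hidden)
        w2 -= learning_rate * hidden.T @ grad_out
        b2 -= learning_rate * grad_out.sum(axis=0, keepdims=True)
        w1 -= learning_rate * inputs.T @ grad_hidden
        b1 -= learning_rate * grad_hidden.sum(axis=0, keepdims=True)

    print(np.round(output, 2))          # typically close to [0, 1, 1, 0]
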
This is thought to be similar to what one does in raising children.  When one chastises a child for hitting his brother, one hopes that that child will also not hit his classmates in school or his colleagues at work as an adult.  One does not in fact know that the child will be able to generalize knowledge gleaned from this procedure to other situations[lv].  Still, this form of indoctrination seems to work in some circumstances, so one keeps trying it.  There may be neuroses that develop.  For instance, the child may end up fearing conflict so much as to be dysfunctional in some situations. 
Similarly, artificial intelligence type neural nets have been shown to develop some generalization[lvi] and some false responses to new inputs.  Nevertheless, just as the parent does not know exactly what aspects of parenting may give positive results in adulthood or why, the engineer designing and programming neural nets does not have a deep understanding of what is happening during training of the neural net.
            Again, the neural nets do not present a deterministic solution to any given problem, but may give a “good enough” solution to some problems.  The student so distressed about his box-unpacking problem would not feel comfortable with this – the idea that a solution pops out; it might not be the perfect solution; it might not be a good solution at all; and yet it is a solution – and some solution is better than no solution.
            The field of neural nets treats that thinking/decision-making process as a black box.  One does not have an analytical formulation of why these networks sometimes succeed – no proof.  Instead, there is only empiricism.
The idea that the human brain similarly operates as a sort of mysterious black box, not at all in accordance with what we think of as reason and logic, is increasingly accepted in the field of neurology.  Moments of daydreaming and procrastination are actually times when the non-conscious part of the brain is at work, the work that leads to the useful output of the brain.  These black box moments – eureka moments, moments of intuition – are in fact the heart of human genius.[lvii]
Historically, common law legal thinkers have been doing this very sort of thing with respect to mixed questions of law and fact, like the reasonable man standard in negligence, the definition of pornography, and the obviousness standard.  The long-standing preference of judges for sending reasonable man questions to a jury for decision indicates a preference for a black box type of solution, much like the neural net type of decision favored by some scientists.  One does not know why a particular jury might find one decision to be negligent and another decision to be a simple mistake, and perhaps one does not want to know.  The idea is to get a determination, and to use that determination to resolve a conflict.
This black box approach to decision making seems to be exactly what Judge Dyk was hoping to avoid in expounding the virtue of bright line tests as early as 2004.

Summary of artificial intelligence techniques
            The three artificial intelligence methods discussed above give some examples of how scientists are trying to model decision-making.  As with judges, different scientists have preferences for one approach over another.  Some prefer more deterministic solutions, while others prefer good enough type solutions.
            In fact, though, experts in artificial intelligence have discovered something now called the “no free lunch” theorem.[lviii]  This theorem has been summarized as saying “When all functions f are equally likely, the probability of observing an arbitrary sequence of m values in the course of optimization does not depend upon the algorithm.”  This casts a certain aura of interchangeability on all search algorithms, such as those mentioned above.  While particular algorithms will solve particular problems more quickly, averaged over all possible problems they can be thought of as being of equal merit.
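The flavor of the theorem can be shown with a toy computation.  In the Python sketch below, the tiny four-point domain and the two fixed visit orders are arbitrary choices made only for the illustration: when every possible function from that domain to {0, 1} is weighted equally, two different deterministic search orders see exactly the same distribution of observed value sequences.

from collections import Counter
from itertools import product

domain = [0, 1, 2, 3]
order_a = [0, 1, 2, 3]   # "algorithm" A: a left-to-right search order
order_b = [3, 1, 0, 2]   # "algorithm" B: some other fixed search order
m = 2                    # number of evaluations observed

def observed(order, f):
    # The sequence of function values seen in the first m evaluations.
    return tuple(f[x] for x in order[:m])

sequences_a = Counter()
sequences_b = Counter()
for values in product([0, 1], repeat=len(domain)):   # every possible function
    f = dict(zip(domain, values))
    sequences_a[observed(order_a, f)] += 1
    sequences_b[observed(order_b, f)] += 1

print(sequences_a == sequences_b)   # True: identical distributions of observations

The equality printed at the end holds for any two non-repeating search orders, which is the sense in which, averaged over all possible problems, no search algorithm outperforms another.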

Theories of Judicial Decision-Making
            It is beyond the scope of this paper to review all prior articles on the topic of judicial decision-making.  Only a few recent ones will be considered.
One prior author, Dan Simon, has written extensively on coherence theory as applied to legal decision-making.[lix]  He diagrams decisions, showing which propositions of fact and law support which conclusions.  He then shows how decision makers, both judges and juries, discard some of these propositions in order to achieve confidence in a conclusion.  Which propositions are discarded depends upon hunch/intuition/prejudice.  Simon seems to find this disconcerting, especially in view of the tendency of decision makers to express their decisions as very certain and, when writing opinions, to fail to address the many propositions they might have discarded.  Simon proposes remedies for this.
Other authors have examined the role of intuition/hunches/ideology/bias/culture, as opposed to deliberation, in legal decision making,[lx] with the conclusion that the former does play a significant part.  Again, though, a frequent feeling seems to be that intuition is unreliable and undesirable.  This feeling seems to stem first from past instances of bigotry and also from various testing situations in which scholars have shown that intuition can be tricked into demonstrably wrong decisions.
In view of this tendency of legal scholars to feel that intuition is undesirable, it is perhaps not surprising that Judge Dyk finds intuition to be similarly undesirable in a test for obviousness; and that the Court of Appeals for the Federal Circuit finds that bright line tests that eliminate propositions for the judges to deliberate upon – as diagrammed in the coherence theory of Simon – are a way of avoiding undesirable use of intuition.
Yet, while deliberation may find errors in intuitive results in some cases, this is hardly a proof that intuition is necessarily worse.  Especially in view of the “no free lunch” theorem, it is to be concluded that intuition, which seems more like “neural nets” in artificial intelligence, may in some cases find correct answers where deliberation, which seems a bit more like A* in its exhaustiveness, makes errors. 
It would appear that this is just what the founding fathers intuited when they created the separation of powers: that sometimes intuition or common sense would be necessary to solve certain problems where statutes could not, and that well-selected legal decision-makers in the judicial context could exercise such intuition.

Conclusions: Judges and Bright Line Tests
            People looking at decision-making have confronted different styles in many domains.  Distinctions in style appear in all the domains discussed above, to wit:
-       Common law v. codification in European legal history;
-       Legislative v. judicial in the US constitution;
-       Deliberation v. intuition in legal decision-making;
-       Deterministic v. “good enough”/satisficing in the psychological study of Asperger’s Syndrome; and
-       Exhaustive logical v. black box mysterious in the field of artificial intelligence.
In each case, it becomes clear that people differ as to what style of decision-making they prefer.  The issues of a) how clearly one understands why a decision is reached and b) how predictable that decision will be appear in each domain.  Predictability and logical explanation of decisions are not the be-all and end-all in any of these domains.  In fact, the search for total predictability and explainability is likely doomed to failure, or to excessive expenditure of time, in many cases.
            The tendency of the CAFC to want to formulate some kind of bright line test, some sort of analytical framework – and the concomitant rejection of intuitive, gut-feel type decisions – indicates a particular bias.  This bias exalts the ability of a judge to foresee above the ability of the judge to decide.
There is a decision, and there is a holding.  The holding is a rationalization, but it is not in fact the decision itself.  The decision itself is who wins and what, if any, remedy is applicable.  
            In the minds of those who want to formulate bright line tests, exalting the dicta and the holding, and their allegedly informative nature, will give predictability in the patent law and reduce litigation.  These thinkers therefore stray into codification-type, deterministic thinking, and into the desire to eliminate the need for decision-making by intuition/gut feel/black box/neural net type thinking.
            And yet there is no proof that “clarifying” the law by use of words/deliberation reduces litigation.  In fact, the laying down of simple, bright lines may only beg more questions.  The words are never complete.  There is always a way in which some lawyer can find an ambiguity, a pretext to go into court, a hole in the words – much as Gödel’s incompleteness theorem showed that mathematics is incomplete.  Moreover, expressing the law in oversimplified sound bites can lead to unintended consequences.  The State Street Bank case, with its oversimplified dicta that business methods were patentable, led to the Bilski case; and the latter, in turn, might be interpreted to exempt from patentability subject matter previously not in question, such as the non-destructive testing of materials.
            There is something to be said for just looking at the intuition of each decision[lxi] – the feeling of the judges in State Street Bank that this subject matter ought to be patentable, and in Bilski that the subject matter ought not yet to be patented – without necessarily looking at the reasoning: the feeling of a judge that one thing is patentable and another is not.  It is true that one person’s intuition may be different from another’s.  Two judges might come to different conclusions with respect to any given decision at hand – and indeed often do, even with extensive legal reasoning, as cases are reversed on appeal.  The more intensely something is litigated, the more likely it is that reasonable men disagreed as to the proper outcome of that particular case.  Yet someone must decide disputes.
            The hope is that by vetting judges, using an elaborate Congressional procedure, one will find men and women who have good intuition as well as good reasoning, and that black box, intuitive decisions will have value even if there is no good rationale for them.  Attempting legislation from the bench, in the form of simplified sound bite type rationales, is not the solution.
            This is not to say that deliberation and explanation of reasoning are never appropriate.  What is not appropriate is the rejection of intuition and the substitution for it of sound-bite/bright-line/oversimplified tests.
            The real problem with seeking out judicial decision-making in the present environment of patent law is the extreme expense involved.  The solution to that problem is not bright line tests, but legal reform to reduce the cost of patent litigation so that the opinions of carefully vetted judges are more affordable.




[i] The author is a 1983 graduate of Columbia Law School, who has been practicing patent law for 25 years.  The author would like to give thanks to J. David Schaffer, Ph.D., SUNY Binghamton for reviewing and making suggestions on this article.
[ii]  e.g. J. J. Barta, Jr. et al., “Supreme Court looks dimly on bright-line patent eligibility tests,” Missouri Lawyers Weekly, Vol. 24, #28, July 12, 2010, http://www.armstrongteasdale.com/files/Uploads/Documents/Bilski%20Article%20for%20Website-8876889-1.PDF; A.J. Buffalino et al., “The U.S. Supreme Court’s Ruling in Bilski v. Kappos: Hedging Against Bright-Line Rules,” National Law Review (9/4/2010), http://www.natlawreview.com/article/us-supreme-court-s-ruling-bilski-v-kappos-hedging-against-bright-line-rules
[iii] T. Dyk, “Predictability in Patent Law,” 1/23/04, NYIPLA luncheon series
[iv] 28 U.S.C. 1295
[v] Keynote Address, 27th Annual Joint Patent Practice Seminar (New York Hilton, New York, NY 2011)
[vi] New York Intellectual Properly Law Association CLE Luncheon Program, (The Harvard Club, New York, NY, December 13, 2010)
[vii] 28 U.S.C. 1295
[viii] Article I, section 1: “All legislative Powers herein granted shall be vested in a Congress of the United States”; Article II, section 1: “The executive Power shall be vested in a President of the United States of America.”; and Article III, section 1: “The judicial Power of the United States, shall be vested in one supreme Court, and in such inferior Courts as the Congress may from time to time ordain and establish.”  http://www.senate.gov/civics/constitution_item/constitution.htm.  A full dissertation into the meaning of the separation of powers is beyond the scope of this paper.  The author will simply inelegantly wave her hands and say that it is her recollection that the judiciary was to decide cases, based on the facts; while the legislature was to make general pronouncements of statute based on policy and politics presented by numerous supplicants/lobbyists.
[ix] The similarity of the three cases discussed here was first brought to the attention of the author in the Accenture/Pitney Bowes amicus curiae brief relating to the petition for certiorari in Bilski, a copy of which is to be found at http://www.patentlyo.com/accentureamicus.pdf.  That brief also dealt with the case of Quanta Computer, Inc. v. LG Elecs., Inc., 128 S. Ct. 2109 (2008), but the present writer questions whether that case belongs in this group, seeing as the Supreme Court seems to have drawn a brighter line standard there than the Federal Circuit did.
[x] Festo Corp. v. Shoketsu Kinzoku Kogyo Kabushiki Co., 95-1066, UNITED STATES COURT OF APPEALS FOR THE FEDERAL CIRCUIT, 234 F.3d 558; 2000 U.S. App. LEXIS 29979; 56 U.S.P.Q.2D (BNA) 1865, November 29, 2000, Decided, vacated and remanded 535 U.S. 722 (2002).
[xi] Teleflex, Inc. v. KSR Int'l Co., 04-1152 , UNITED STATES COURT OF APPEALS FOR THE FEDERAL CIRCUIT, 119 Fed. Appx. 282; 2005 U.S. App. LEXIS 176, January 6, 2005, Decided ,  THIS DECISION WAS ISSUED AS UNPUBLISHED OR NONPRECEDENTIAL AND MAY NOT BE CITED AS PRECEDENT. PLEASE REFER TO THE RULES OF THE FEDERAL CIRCUIT COURT OF APPEALS FOR RULES GOVERNING CITATION TO UNPUBLISHED OR NONPRECEDENTIAL OPINIONS OR ORDERS. , Later proceeding at KSR Int'l Co. v. Teleflex, Inc., 546 U.S. 808, 126 S. Ct. 327, 163 L. Ed. 2d 41, 2005 U.S. LEXIS 5490 (2005)US Supreme Court certiorari granted by, Motion granted by KSR Int'l Co. v. Teleflex, Inc., 126 S. Ct. 2965, 165 L. Ed. 2d 949, 2006 U.S. LEXIS 4912 (U.S., 2006)Reversed by, Remanded by KSR Int'l Co. v. Teleflex Inc., 127 S. Ct. 1727, 167 L. Ed. 2d 705, 2007 U.S. LEXIS 4745 (U.S., 2007)
[xii] In re Bilski, 2007-1130, UNITED STATES COURT OF APPEALS FOR THE FEDERAL CIRCUIT, 545 F.3d 943; 2008 U.S. App. LEXIS 22479; 2008-2 U.S. Tax Cas. (CCH) P50,621, October 30, 2008, Decided, US Supreme Court certiorari granted by Bilski v. Doll, 2009 U.S. LEXIS 4103 (U.S., June 1, 2009).   The author filed an amica curiae brief for the successful petition for certiorari
[xiii] When the patentee responds to the rejection by narrowing his claims, this prosecution history estops him from later arguing that the subject matter covered by the original, broader claim was nothing more than an equivalent. 535 U.S. at 722; see also 535 U.S. at 733, section III of the opinion

“Prosecution” is what patent professionals call the administrative law process by which patents are obtained in the USPTO.  See e.g. MPEP 707.07(j) I, where the term is used casually by the United States Patent and Trademark Office as if everyone would automatically know what it means.
[xiv]  234 F. 3d at 569
            “Doctrine of Equivalents” is an equitable type doctrine that broadens patent scope during litigation to embodiments considered “equivalent” to the claimed invention, rather than restricting the patentee to the literal language of the claims.  See e.g. Warner-Jenkinson Co. v. Hilton Davis Chemical, 520 U.S. 17, 117 S.Ct. 1040 (1997); Graver Mfg. Co. v. Linde Co., 339 U.S. 605 (1950); Winans v. Denmead, 15 How. 330, 347 (1854)
Unfortunately, the nature of language makes it impossible to capture the essence of a thing in a patent application. The inventor who chooses to patent an invention and disclose it to the public, rather than exploit it in secret, bears the risk that others will devote their efforts toward exploiting the limits of the patent's language 535 US at 731
[xv] Festo Corp. v. Shoketsu Kinzoku Kogyo Kabushiki Co., 535 U.S. 722, 122 S.Ct. 1831 (2002)
[xvi] The word “reference” is commonly used in the field of patent law to refer to documents considered in determinations of obviousness and anticipation.
[xvii] In re Sang-Su Lee, 277 F.3d 1338 (Fed. Cir. 2002)
[xviii] This is the term used to describe agents and attorneys who engage in patent prosecution.
[xix] The author personally liked this test, also, because of the nature of the information explosion.  http://www.slais.ubc.ca/COURSES/libr500/03-04-wt2/www/K_Woods/vol1.htm  Due to this currently worsening situation, combining references is actually much more difficult for inventors than one might suppose merely by making a keyword search of related art – and much more difficult than it used to be.  Highly qualified inventors must spend a great deal of time and money testing many alternatives, prior to choosing one.  Employers have to hire these expensive employees for long periods of time to process the information and make tests.  Employers and inventors need protection for their investment, particularly in the current economic climate.  Prior art searches are inherently based on hindsight, because they use the claims formulated in a patent document as keys to conducting searches.  Such searches would not necessarily be possible absent the document. Hindsight often makes combinations look obvious, even though they require considerable time and energy on the part of expensive, highly educated employees – or struggling individual inventors.  Courts and patent examiners, sitting in isolation from the development environment, simply fail to see what investment is involved – and often that information is not in the record.
[xx] slip opinion at pp. 10-11
[xxi] slip opinion at p. 15
[xxii] slip opinion at p. 17.  It may be argued here, though, that the Supreme Court over-interpreted what the Federal Circuit said.  The Federal Circuit never said that common sense could not be one of the findings, only that the district court had failed to make specific findings.  Indeed, the Supreme Court recognized that the Federal Circuit later acknowledged common sense.


[xxiii] American Law Institute, Restatement of the Law Second: Torts 2d, §283, Ch. 12, p. 12 et seq.
[xxiv] In re Bilski, 545 F.3d 943 (Fed. Cir. 2008) cert. granted Bilski v. Doll, 08-964 (U.S. 6-1-2009)
[xxv] State St. Bank & Trust Co. v. Signature Fin. Group, 96-1327, 149 F.3d 1368; 1998 U.S. App. LEXIS 16869; 47 U.S.P.Q.2D (BNA) 1596 (Fed. Cir. 1998), cert. den. 1999 U.S. LEXIS 493 (1999)
[xxvi] Broad dicta  of the sort that appeared in this case seem to follow in the line of thinking espoused by Judge Dyk in his January 2004 presentation, that somehow the court ought to be giving broad guidance to the public – and that such guidance will reduce future litigation.
[xxvii] 35 U.S.C. 273
[xxviii] Philips amicus brief.
[xxix] Bilski v. Kappos, 561 U.S. ____, 130 S. Ct. 3218, 177 L. Ed. 2d 792 (2010)
[xxx] There is some humor to be taken from this opinion.  Conservative justices, who tend to be more pro-patent than liberal ones, took the point of view that Congress intended the courts to interpret the scope of patentable subject matter in light of changing circumstances.  Liberal justices, who tend to be more anti-patent than conservative ones, took the point of view that the scope of patentable subject matter ought to be taken from the original intent of the framers of the constitution – despite the obvious fact that much subject matter for which patents are currently sought could not even have been imagined by the framers of the constitution.  These positions, from the point of view of legal method, are exactly opposite to the positions that those same judges might have been expected to take in some of the edgier civil rights cases.
[xxxi] A great deal of information has been published about Asperger’s Syndrome in recent years.  Some useful books on the topic include T. Attwood, The Complete Guide to Asperger’s Syndrome (Jessica Kingsley Publishers 2007), and P. R. Bashe et al., The OASIS Guide to Asperger Syndrome: Advice, Support, Insight, and Inspiration (Crown Publishers 2001).
[xxxii] http://en.wikipedia.org/wiki/People_speculated_to_have_been_autistic
[xxxiii] S. Guttstein, “Going to the Heart of Autism: The Relationship Development Intervention Program” DVD (www.rdiconnect.com 2005)
[xxxiv] Before the reader starts going out diagnosing people, it should be made clear that this type of thinking deficit is only one symptom of an autistic spectrum disorder.  Others must be present in order for the disorder to be diagnosed.  People without an autistic spectrum disorder may nevertheless have some autistic features like this one.  More information about psychiatric diagnoses is to be found in Diagnostic and Statistical Manual of Mental Disorders IV (DSM IV) published by the American Psychiatric Association
[xxxv] see e.g. http://en.wikipedia.org/wiki/Satisficing .  Some writings including discussion of this topic include Simon, H. A. (1957). Models of man: Social and rational. New York: Wiley; Simon, H. A. (1978). Rationality as a process and product of thought. American Economic Review, 68, 1-16; Simon, H. A. (1983). Reason in human affairs. Stanford: Stanford University Press
[xxxvi] Jacobellis v. Ohio, 378 U.S. 184, 197 (1964).
[xxxvii] Hotchkiss v. Greenwood, 52 U.S. 11 How. 248 (1850)
[xxxviii]  more or less abandoned, see e.g. Graham v. John Deere Co., 383 U.S. 1, 15 (1966)
[xxxix] N. Branan, “Unconscious Decisions: As we mull a choice, our subconscious decides for us,” Scientific American Mind, August, 2008
[xl] with reference to the following case books H.W. Jones et al, Legal Method: Cases and Materials, (The Foundation Press, Inc. 1980 Mineola, NY) and Berman, Materials on Comparative Law (privately printed for the exclusive use of students at the Columbia University School of Law, not for publication Spring 1981)
[xli] In some cases, it does not seem that any of the statements of the court are holding, since none of them stand up to scrutiny as justifying the decision in light of the facts.  One such case is Gottschalk v. Benson, as pointed out by the author in her amicus curiae brief in the Bilski case.  If a later court comes to the conclusion that none of the reasoning in an earlier opinion makes sense, a case might be “restricted to its facts.”
[xlii]  See e.g. N. Millich, “Building Blocks of Analysis: Using Simple ‘Sesame Street Skills’ and Sophisticated Educational Learning Theories in Teaching a Seminar in Legal Analysis and Writing,” 27 Santa Clara L. Rev. 1127 (1993-1994) (this article introduces a typical law school problem/exam question and also reviews in more detail what Legal Method is like and how to teach it).
[xliii] see e.g. http://www.nvcc.edu/home/lshulman/Rel232/lectures/judaism/beliefs.htm
[xliv] Lao-tzu, Tao Te Ching (D.C. Lau trans., Penguin Books 1963)
[xlv] Artificial intelligence is a field that bears some striking similarities to law, especially legal drafting.  In artificial intelligence, the practitioner seeks to induce a machine to make decisions that satisfy some criterion.  In legal drafting, the practitioner seeks to induce a judge to make decisions that satisfy some criterion. 
Legal drafting might include generation of instruments such as agreements, wills, deeds, or patents.  The practitioner uses a combination of past experience, legal knowledge, legal forms, and client input to attempt to craft an instrument that will anticipate all possible scenarios and give a predictable result that is as favorable to the client as the client’s factual circumstances will allow. 
[xlvi] One paper that has previously considered genetic algorithms and neural nets as compared with legal reasoning is G. L. Blasi, “What Lawyers Know, Cognitive Science, and the Functions of Theory,” 45 J. Legal Educ. 313, 374-378 (1995)
[xlvii] Fig. 27 of Dorst et al.
[xlviii] Fig. 28 of Dorst et al.
[xlix] Fig. 2a is Fig. 3a of Schaffer et al.
[l] Fig 3b of Schaffer et al.
[li] Fig.3c of Schaffer et al.
[lii] http://en.wikipedia.org/wiki/Survival_of_the_fittest
[liii] see e.g.  L. Davis (ed.), Handbook of Genetic Algorithms, Van Nostrand Reinhold, 1991
[liv] Fig. 3 is Fig. 1 of Schaffer et al. The operations performed by the nodes are commonly multiplications by constant coefficients and summation over a node’s inputs, US patent 6601053
[lv] And indeed one of the author’s family members diagnosed with Asperger’s Syndrome has had a very difficult time with such generalizations, so this process of indoctrination clearly does not always work even in  humans.
[lvi] Rumelhart, McClelland, and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations, MIT Press, 1986.
[lvii] R. L. Holtz, “A Wandering Mind Heads Straight Toward Insight – Researchers Map the Anatomy of the Brain’s Breakthrough Moments and Reveal the Payoff of Daydreaming,” Wall Street Journal (Eastern Edition) New York NY 6/19/09 p. A-11; J. Glausiusz et al., “Devoted to Distraction,” Psychology Today, Mar/Apr 2009, 42, 2; Platinum Periodicals, p. 84
[lviii] see e.g. http://en.wikipedia.org/wiki/No_free_lunch_theorem, version of 18 May 2009 at 22:20
[lix] D. Simon, “A Third View of the Black Box: Cognitive Coherence in Legal Decision Making,” 71 U. Chi. L. Rev. 511 (2004), D. Simon, “Freedom from Constraint in Adjudication a Look Through the Lens of Cognitive Psychology,” 67 Brook. L. Rev. 1097 (2001-2002); D. Simon, “A Psychological Model of Decision-Making,” 30 Rutgers L. J. 1 (1998-1999)
[lx] C. Guthrie et al., “Blinking on the Bench: How Judges Decide Cases,” 93 Cornell L. Rev 1 (2007-2008); D. M. Kahan, ‘“Ideology in” or “Cultural Cognition of” Judging: What Difference Does It Make?,’ 92 Marq. L. Rev. 413 (2008-2009)
[lxi]  The law school idea that reasoning may be able to be found to support a broad array of desired conclusions comes in to play here.  In a hotly contested en banc case, especially with briefs of amici, many avenues of reasoning will be offered so that some reasonable-sounding opinion will generally be able to be written.