Accountics Scientists Failing to Communicate on the AAA Commons 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn ."

Bob Jensen at Trinity University 

Accountics Scientists on the AAA Commons

Accountics Scientists Should Take Note
Alan Alda Uses Improv to Teach Scientists How to Communicate Their Ideas
---
http://www.openculture.com/2015/03/alan-alda-uses-improv-to-teach-scientists-how-to-communicate-their-ideas.html

Jensen Comment
Accountics scientists are very poor communicators of their research. Most of their sessions at annual meetings only need about ten chairs. One time I was on a panel with Bill Cooper and Yuji Ijiri. Only the panel showed up at the session.

One time when an accountics scientist received a Notable Contributions to the Literature Award at an American Accounting Association annual meeting, he said that in a previous year, when he presented his research at an AAA meeting, the only people who showed up came to borrow chairs for the session next door.

Not one accountics scientist has a blog. Nor do they discuss their research on listservs or websites.

Some professors of mathematics, statistics, and econometrics have great blogs. Why doesn't a single accountics scientist have a blog?

How Accountics Scientists Should Change: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm


"Richard Feynman on the Universal Responsibility of Scientists," by Maria Popover, Brain Pickings, March 6, 2013 ---
http://www.brainpickings.org/index.php/2013/03/06/richard-feynman-responsibility-of-scientists/

. . .

It is our responsibility as scientists, knowing the great progress and great value of a satisfactory philosophy of ignorance, the great progress that is the fruit of freedom of thought, to proclaim the value of this freedom, to teach how doubt is not to be feared but welcomed and discussed, and to demand this freedom as our duty to all coming generations.

Jensen Comment
Are accountics scientists living up to their responsibilities?
 

In many ways they are living up to their responsibilities, but in some ways they are failing badly relative to the real scientists, especially the "welcomed and discussed" part.
See below.

Introduction (including a scrapbook download on the past, present, and future of accountics science)

Major Problems in Accounting Doctoral Programs

Major problems in accountics science

Accountics Scientists Fail to Communicate on the AAA Commons  

Developmental Research and the Pathways Commission Initiatives

Essays on the State of Accounting Scholarship  

The Myth of Rigorous Accounting Research

The Cargo Cult by Sudipta Basu

The Sad State of Economic Theory and Research 

Academic Accounting Inventors Are Rare

Robustness Issues

Real-Science versus Pseudo-Science

Gasp! How could an accountics scientist question such things? This is sacrilege!

A Scrapbook on What's Wrong with the Past, Present and Future of Accountics Science

An enormous problem with accountics science is that this science largely confines itself to databases where it's only possible to establish correlations and not causes, because zero causal information is contained in the big databases they purchase rather than collect themselves ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf 

2012 "Final" Pathways Commission Report ---
http://commons.aaahq.org/files/0b14318188/Pathways_Commission_Final_Report_Complete.pdf
Also see a summary at
"Accounting for Innovation," by Elise Young, Inside Higher Ed, July 31, 2012 ---
http://www.insidehighered.com/news/2012/07/31/updating-accounting-curriculums-expanding-and-diversifying-field

Granulation
Obviously correlation is not causation, but don't suggest this too loudly to referees of The Accounting Review ---
An enormous problem with accountics science, and finance in general, is that these sciences largely confine themselves to databases where it's only possible to establish correlations, not causes, because zero causal information is contained in the big databases they purchase rather than collect themselves ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf 
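Jensen's correlation-versus-causation point can be illustrated with a toy simulation (hypothetical data, not drawn from his paper): two variables recorded in an archival database can be strongly correlated solely because a third, unrecorded variable drives them both.

```python
import numpy as np

# Toy illustration: x and y are strongly correlated only because both
# are driven by a hidden confounder z that never appears in the
# purchased database. No analysis of x and y alone can reveal that
# x has no causal effect on y.
rng = np.random.default_rng(42)
n = 10_000

z = rng.normal(size=n)             # unobserved confounder (e.g., firm size)
x = z + 0.5 * rng.normal(size=n)   # "treatment" variable in the database
y = z + 0.5 * rng.normal(size=n)   # "outcome" variable in the database

r = np.corrcoef(x, y)[0, 1]
print(f"correlation(x, y) = {r:.2f}")  # close to 0.8, yet x does not cause y
```

With only x and y recorded, as in a purchased database, a regression of y on x would produce a highly significant coefficient even though intervening on x would have no effect on y; identifying the confounder requires collecting data beyond what the database contains, which is Jensen's point about granulation.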

Academic Versus Political Reporting of Research:  Percentage Columns Versus Per Capita Columns ---
http://www.cs.trinity.edu/~rjensen/temp/TaxAirlineSeatCase.htm
by Bob Jensen, April 3, 201

Now that the AAA Commons has been formed, Rhett, you've got an opportunity to explain your AAA journal research publication successes to interested accounting teachers, researchers, and practitioners who visit the Commons.
Accounting Teacher Scarlett O'Hara

Frankly, Scarlett, after I get a hit for my resume in an AAA journal I just don't give a damn.
Accountics Scientist Rhett Butler

Accountics is the mathematical science of values.
Charles Sprague [1887] as quoted by McMillan [1998, p. 1]

http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm 

 

The AAA's Pathways Commission Accounting Education Initiatives Make National News
Accountics Scientists Should Especially Note the First Two Recommendations


David Johnstone asked me to write a paper on the following:
"A Scrapbook on What's Wrong with the Past, Present and Future of Accountics Science"
Bob Jensen
February 19, 2014
SSRN Download:  http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2398296 

Abstract

For operational convenience I define accountics science as research that features equations and/or statistical inference. Historically, there was a heated debate in the 1920s as to whether the main research journal of academic accounting, The Accounting Review (TAR), which commenced in 1926, should be an accountics journal with articles that mostly featured equations. Practitioners and teachers of college accounting won that debate.

TAR articles and accountancy doctoral dissertations prior to the 1970s seldom had equations. For reasons summarized below, doctoral programs and TAR evolved to the point where, by the 1990s, having equations became virtually a necessary condition for a doctoral dissertation and acceptance of a TAR article. Qualitative normative and case method methodologies disappeared from doctoral programs.

What’s really meant by “featured equations” in doctoral programs is merely symbolic of the fact that North American accounting doctoral programs pushed out most of the accounting to make way for econometrics and statistics that are now keys to the kingdom for promotion and tenure in accounting schools ---
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms

The purpose of this paper is to make a case that the accountics science monopoly of our doctoral programs and published research is seriously flawed, especially its lack of concern about replication and focus on simplified artificial worlds that differ too much from reality to creatively discover findings of greater relevance to teachers of accounting and practitioners of accounting. Accountics scientists themselves became a Cargo Cult.

 

 


"Accounting for Innovation," by Elise Young, Inside Higher Ed, July 31, 2012 ---
http://www.insidehighered.com/news/2012/07/31/updating-accounting-curriculums-expanding-and-diversifying-field

Accounting programs should promote curricular flexibility to capture a new generation of students who are more technologically savvy, less patient with traditional teaching methods, and more wary of the career opportunities in accounting, according to a report released today by the Pathways Commission, which studies the future of higher education for accounting.

In 2008, the U.S. Treasury Department's  Advisory Committee on the Auditing Profession recommended that the American Accounting Association and the American Institute of Certified Public Accountants form a commission to study the future structure and content of accounting education, and the Pathways Commission was formed to fulfill this recommendation and establish a national higher education strategy for accounting.

In the report, the commission acknowledges that some sporadic changes have been adopted, but it seeks to put in place a structure for much more regular and ambitious changes.

The report includes seven recommendations:

According to the report, its two sponsoring organizations -- the American Accounting Association and the American Institute of Certified Public Accountants -- will support the effort to carry out the report's recommendations, and they are finalizing a strategy for conducting this effort.

Hsihui Chang, a professor and head of Drexel University’s accounting department, said colleges must prepare students for the accounting field by encouraging three qualities: integrity, analytical skills and a global viewpoint.

“You need to look at things in a global scope,” he said. “One thing we’re always thinking about is how can we attract students from diverse groups?” Chang said the department’s faculty comprises members from several different countries, and the university also has four student organizations dedicated to accounting -- including one for Asian students and one for Hispanic students.

He said the university hosts guest speakers and accounting career days to provide information to prospective accounting students about career options: “They find out, ‘Hey, this seems to be quite exciting.’ ”

Jimmy Ye, a professor and chair of the accounting department at Baruch College of the City University of New York, wrote in an email to Inside Higher Ed that his department is already fulfilling some of the report’s recommendations by inviting professionals from accounting firms into classrooms and bringing in research staff from accounting firms to interact with faculty members and Ph.D. students.

Ye also said the AICPA should collect and analyze supply and demand trends in the accounting profession -- but not just in the short term. “Higher education does not just train students for getting their first jobs,” he wrote. “I would like to see some study on the career tracks of college accounting graduates.”

Mohamed Hussein, a professor and head of the accounting department at the University of Connecticut, also offered ways for the commission to expand its recommendations. He said the recommendations can’t be fully put into practice with the current structure of accounting education.

“There are two parts to this: one part is being able to have an innovative curriculum that will include changes in technology, changes in the economics of the firm, including risk, international issues and regulation,” he said. “And the other part is making sure that the students will take advantage of all this innovation.”

The university offers courses on some of these issues as electives, but it can’t fit all of the information in those courses into the major’s required courses, he said.

Continued in article

Bob Jensen's threads on Higher Education Controversies and Need for Change ---
http://www.trinity.edu/rjensen/HigherEdControversies.htm

The sad state of accountancy doctoral programs ---
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms


Hi Pat,

I had a phone call asking what I meant by "Cargo Cult" ---
Sudipta Basu picked up on the Cargo Cult analogy in describing the stagnation of accountics science research over the past few decades ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm


If you carefully read those four essays in the latest Accounting Horizons (December 2012) you would find that I go pretty easy on accountics researchers compared to other accounting researchers who find accountics research "stagnating" and "lacking in innovation" ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

I'm only a minuscule part of the accounting teachers, practitioners, and researchers who are fed up with accountics science dominance of doctoral programs and top journals.

I don't disagree that there should be curriculum tracks in business schools for the use of information in decision making, including financial statement analysis, balanced scorecard, etc.

But you and Richard seem to be clinging to silo courses that the Pathways Commission is trying to overcome.

Do you think he's correct that accountics scientists teaching managerial accounting in MBA courses and teachers of financial statement analysis should have zero interest in bridging silos? Richard in particular seems to be defending silos.

Have you, Pat, read the Pathways Commission Report?
http://commons.aaahq.org/files/0b14318188/Pathways_Commission_Final_Report_Complete.pdf

The Report gives great attention to what it describes as "impediments" to curriculum development in business schools, particularly those silos that you and Richard seem to be clinging to in this thread. Beginning on Page 14 the Report states the following:
 

Impediments exist at institutional, program/department, and individual levels. Among the most significant impediments are

1) failure to acknowledge what drives faculty to change,

2) inability to overcome the silo effect in many departments where curricula are viewed simply as collections of independent courses,

3) delays in incorporating effective practices in pedagogy because faculty lack experience, knowledge, and development opportunities,

4) the slow pace at which curricular change occurs within colleges and universities,

5) lack of flexibility in tenure processes and post-tenure review focused primarily on research productivity,

6) lack of reward structures promoting student-centeredness and curricular innovation,

7) inability or unwillingness of deans and department chairs to implement change, and

8) lack of appreciation or understanding of the importance of sound pedagogy and professional relevance.

Without the commitment of accounting educators and practice colleagues to challenge the institutional cultures and structures at the root of these impediments, sustaining renewal and innovation in accounting education will continue to be limited. The Commission recommends actions to address these impediments, outlined in more detail in the full report.

 

I think that Richard is more concerned that accountics scientists should have zero interest in the following quotation from Page 51 of that Report:

 

In a 1995 Accounting Horizons commentary, “Accounting Research: On the Relevance of Research to Practice,” James Leisenring and L. Todd Johnson of the Financial Accounting Standards Board (FASB) wrote about a void between the research in top academic journals that emphasize methodological rigor and professional journals that favor businesslike articles. Leisenring and Johnson believe there is an audience for the research that could fill this void: “That audience would like to see those professors get to work on doing the types of analysis that would provide insights relevant to practice. And it would like to see them communicate those insights in articles that would be understandable to practitioners. That would be useful.”

This void still exists today. Both producers and consumers will have to participate and contribute to this endeavor. Collaboration among academic researchers and practitioners is not a new idea, but it is not common in currently published research; therefore, opportunities are missed to increase the flow of useful, quality research. Greater collaboration between academic researchers, who are skilled and experienced in conducting and validating research, and professional practitioners, who have current insights into leading and emerging practices, will help to increase the level of useful, quality research that will help to advance the profession.

I think you and Richard in particular overlook the part about: "producers and consumers will have to participate and contribute to this endeavor."

 An enormous problem with accountics science is that this science largely confines itself to databases where it's only possible to establish correlations and not causes, because zero causal information is contained in the big databases they purchase rather than collect themselves ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf 
 

The Pathways Commission would like accountics researchers to become more involved at the practice level of both generating and using accounting information --- that is when accountics researchers would break free of the Cargo Cult.

Respectfully,
Bob Jensen
 

Hi Denny,

Actually this one I did catch in my morning newsletter from the AICPA. But I had not yet made a tidbit out of it.

Having greater access to data in practitioner audit firms should give significant traction to the initiatives of the Pathways Commission Report calling for greater interaction between academic accounting researchers and the practicing profession. Academics might even begin to make more significant contributions to the needs of the profession.

Thanks,
Bob Jensen

"CAQ, AAA team to give researchers access to audit firm personnel," by Ken Tisiac, Journal of Accountancy, January 2013 ---
http://journalofaccountancy.com/News/20137198.htm

A new program announced Thursday by the Center for Audit Quality (CAQ) and the Auditing Section of the American Accounting Association (AAA) will help accounting and auditing academics gain access to audit firm personnel to participate in academic research projects.

The joint venture between the CAQ and AAA Auditing Section is designed to help generate research on issues that are relevant to audit practice.

Doctoral students and tenure-track professors are the initial group to be provided access to audit firm staff to complete data collection protocols through the Access to Audit Personnel program.

“We hope to encourage scholars to focus their research and teaching in auditing, which is critical to the sustainability of the profession,” CAQ Executive Director Cindy Fornelli said in a statement. “We thank our member firms for opening their doors to the next generation of accounting and auditing professors.”

The CAQ is affiliated with the AICPA.

Firms that are CAQ Governing Board members have agreed to participate in the program, which requires doctoral students and tenure-track professors to submit a request for proposal (RFP) to a committee of senior academics and audit practitioners.

The RFP will require the researchers to provide a detailed description of their research, methodology, and how the research will fit into the existing literature. The full criteria for the RFP are available on the CAQ’s website.

A total of five proposals will be approved this year by the committee in what will be an annual program, and the requests will be forwarded to the firms, which have pledged to cooperate. The deadline for RFPs to be submitted is April 22.

The program is designed to break down a barrier to relevant research that has existed in accounting and auditing for years. One objective for the profession described in the Pathways Commission report that charts a national strategy for the next generation of accountants was to focus more academic research on relevant practice issues. The report said greater collaboration between academic researchers and professional practitioners is needed.

In auditing research, that collaboration should increase as a result of this project.

“It provides access to auditors, people actually practicing auditing, to help us find and answer questions that can be helpful to them,” said Roger Martin, president of the AAA Auditing Section. “Often, if we can’t find auditors to help with this research, we end up using students as participants, or other proxies for auditors. And that’s never very satisfying. It’s helpful, but not as good as getting access to those people doing the things we want to research.”

Martin said firms are accustomed to working with established, veteran researchers, who use their professional contacts to gain access to appropriate auditing personnel. But many younger researchers haven't yet developed those contacts, and they are the focus of the new program.

Continued in article

 

Major Problems in Accounting Doctoral Programs

The Sad State of North American Ph.D. Programs

Accounting at a Tipping Point (Slide Show)
Former AAA President Sue Haka
April 18, 2009
http://commons.aaahq.org/files/20bbec721b/Midwest_region_meeting_slides-04-17-09.pptm

Bob Jensen's threads on the Sad State of North American Ph.D. Programs ---
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms


The limits of mathematical and statistical analysis of big data
From the CFO Journal's Morning Ledger on April 18, 2014

The limits of social engineering
Writing in MIT Technology Review, tech reporter Nicholas Carr pulls from a new book by one of MIT’s noted data scientists to explain why he thinks Big Data has its limits, especially when applied to understanding society. Alex ‘Sandy’ Pentland, in his book “Social Physics: How Good Ideas Spread – The Lessons from a New Science,” sees a mathematical modeling of society made possible by new technologies and sensors and Big Data processing power. Once data measurement confirms “the innate tractability of human beings,” scientists may be able to develop models to predict a person’s behavior. Mr. Carr sees overreach on the part of Mr. Pentland. “Politics is messy because society is messy, not the other way around,” Mr. Carr writes, and any statistical model likely to come from such research would ignore the history, politics, class and messy parts associated with humanity. “What big data can’t account for is what’s most unpredictable, and most interesting, about us,” he concludes.

Jensen Comment
The sad state of accountancy doctoral programs in the 21st Century is that virtually all of them in North America teach only the methodology and technique of analyzing big data with statistical tools or the analytical modeling of artificial worlds based on dubious assumptions that simplify reality ---
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms

The Pathways Commission, sponsored by the American Accounting Association, strongly proposes adding non-quantitative alternatives to doctoral programs, but I see zero evidence of any progress in that direction. The main problem is that it's just much easier to avoid having to collect data by beating purchased databases with econometric sticks until something, usually an irrelevant something, falls out of the big data piñata.

"A Scrapbook on What's Wrong with the Past, Present and Future of Accountics Science"
Bob Jensen
February 19, 2014
SSRN Download:  http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2398296 

 


A paper can become highly cited because it is good science – or because it is eye-catching, provocative or wrong. Luxury-journal editors know this, so they accept papers that will make waves because they explore sexy subjects or make challenging claims. This influences the science that scientists do. It builds bubbles in fashionable fields where researchers can make the bold claims these journals want, while discouraging other important work, such as replication studies.

"How journals like Nature, Cell and Science are damaging science:  The incentives offered by top journals distort science, just as big bonuses distort banking," Randy Schekman, The Guardian, December 9, 2013 ---
http://www.theguardian.com/commentisfree/2013/dec/09/how-journals-nature-science-cell-damage-science

I am a scientist. Mine is a professional world that achieves great things for humanity. But it is disfigured by inappropriate incentives. The prevailing structures of personal reputation and career advancement mean the biggest rewards often follow the flashiest work, not the best. Those of us who follow these incentives are being entirely rational – I have followed them myself – but we do not always best serve our profession's interests, let alone those of humanity and society.

We all know what distorting incentives have done to finance and banking. The incentives my colleagues face are not huge bonuses, but the professional rewards that accompany publication in prestigious journals – chiefly Nature, Cell and Science.

These luxury journals are supposed to be the epitome of quality, publishing only the best research. Because funding and appointment panels often use place of publication as a proxy for quality of science, appearing in these titles often leads to grants and professorships. But the big journals' reputations are only partly warranted. While they publish many outstanding papers, they do not publish only outstanding papers. Neither are they the only publishers of outstanding research.

These journals aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research. Like fashion designers who create limited-edition handbags or suits, they know scarcity stokes demand, so they artificially restrict the number of papers they accept. The exclusive brands are then marketed with a gimmick called "impact factor" – a score for each journal, measuring the number of times its papers are cited by subsequent research. Better papers, the theory goes, are cited more often, so better journals boast higher scores. Yet it is a deeply flawed measure, pursuing which has become an end in itself – and is as damaging to science as the bonus culture is to banking.

It is common, and encouraged by many journals, for research to be judged by the impact factor of the journal that publishes it. But as a journal's score is an average, it says little about the quality of any individual piece of research. What is more, citation is sometimes, but not always, linked to quality. A paper can become highly cited because it is good science – or because it is eye-catching, provocative or wrong. Luxury-journal editors know this, so they accept papers that will make waves because they explore sexy subjects or make challenging claims. This influences the science that scientists do. It builds bubbles in fashionable fields where researchers can make the bold claims these journals want, while discouraging other important work, such as replication studies.

In extreme cases, the lure of the luxury journal can encourage the cutting of corners, and contribute to the escalating number of papers that are retracted as flawed or fraudulent. Science alone has recently retracted high-profile papers reporting cloned human embryos, links between littering and violence, and the genetic profiles of centenarians. Perhaps worse, it has not retracted claims that a microbe is able to use arsenic in its DNA instead of phosphorus, despite overwhelming scientific criticism.

There is a better way, through the new breed of open-access journals that are free for anybody to read, and have no expensive subscriptions to promote. Born on the web, they can accept all papers that meet quality standards, with no artificial caps. Many are edited by working scientists, who can assess the worth of papers without regard for citations. As I know from my editorship of eLife, an open access journal funded by the Wellcome Trust, the Howard Hughes Medical Institute and the Max Planck Society, they are publishing world-class science every week.

Funders and universities, too, have a role to play. They must tell the committees that decide on grants and positions not to judge papers by where they are published. It is the quality of the science, not the journal's brand, that matters. Most importantly of all, we scientists need to take action. Like many successful researchers, I have published in the big brands, including the papers that won me the Nobel prize for medicine, which I will be honoured to collect tomorrow. But no longer. I have now committed my lab to avoiding luxury journals, and I encourage others to do likewise.

Continued in article
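Schekman's observation that an impact factor is an average, and therefore says little about any individual paper, can be sketched numerically (the citation counts below are hypothetical): citation distributions are heavily skewed, so a single blockbuster paper can pull the journal's mean far above what a typical paper in that journal achieves.

```python
# Toy illustration with made-up numbers: a journal's impact factor is a
# mean over all its papers, so one heavily cited paper can mask the fact
# that the typical (median) paper is cited far less.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 250]  # citations per paper

impact_factor = sum(citations) / len(citations)         # the advertised mean
typical_paper = sorted(citations)[len(citations) // 2]  # the median paper

print(f"impact factor (mean): {impact_factor:.1f}")  # 27.1
print(f"median paper:         {typical_paper}")      # 3
```

Judging a paper by its journal's impact factor implicitly credits it with the mean, even though the median paper here is cited only three times, which is exactly why Schekman calls the measure deeply flawed.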

Bob Jensen's threads on how prestigious journals in academic accounting research have badly damaged academic accounting research, especially in the accountics science takeover of doctoral programs where dissertation research no longer is accepted unless it features equations ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

Lack of Replication in Accountics Science:
574 Shields Against Validity Challenges in Plato's Cave ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

 


Accounting Students Dropping Out of Accountics Science Doctoral Programs

September 5, 2012 message from Professor XXXXX

Hello Bob:

Thought you might find the following interesting.

I had a student as an undergrad that I encouraged to get a PhD.  She went out and worked a few years and came back to get a master’s degree.  During the master’s degree she decided what she really wanted was to be a professor and applied for a PhD degree at Tennessee.  Got accepted and went without finishing her masters.  One year later I found her in my master’s class.  She was so fed up with the total emphasis on what you have been calling accountics that she dropped out and came back to finish her masters.  I have been trying to convince her that it was just Tennessee and she really did want to become a professor but it has been an uphill battle and I would say at this point highly unlikely that she will ever again consider being a professor.

XXXXX

September 5, reply from Bob Jensen

Dear Professor XXXXX,

It's not just Tennessee. Virtually all accounting doctoral programs in AACSB accredited universities have literally been taken over by accountics science researchers --- http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms

This is one of the main reasons the 2012 Pathways Commission Report appeals for alternative tracks to be commenced in accountancy doctoral programs.

I had a similar (actually brilliant) student who did complete his master's degree in accounting at Trinity University. He was a joint accounting and mathematics undergraduate major. He was admitted to the University of Texas doctoral program and dropped out for the same reason you mention above --- too much accountics and too little accounting even though he was doing well in his accountics science courses. He just was more interested in accounting than accountics.

Thanks,
Bob Jensen

September 5, 2012 reply from Professor YYYYY

We had a retired Marine who completed our BS and MS program. He was an extremely good student, so much so that we hired him as a lecturer when he completed his master's degree. After a few years he decided at age 45 to go get his PhD. He looked at several programs (RRRRR SSSSS, and TTTTT because they are all about 3-4 hours away and his family was not going to move with him).

He finally decided on TTTTT. TTTTT is a good school, but definitely not elite or one of the Top 10 accountics science programs. I felt that he could get a decent doctoral program there. He hated it despite doing well in the classes. He was frustrated at reading nothing but mathematics and statistical papers that had nothing to do with what he wanted to pursue: teaching and professional research.

He mentioned this in class on several occasions and was basically told that real accounting professors were not interested in teaching. The PhD was a research degree, and as such you would not be learning how to teach; it was assumed that you knew enough of that when admitted to the program and that your real goal should be to get placed at a school where teaching would not interfere with research. On several occasions the students took him aside and said to be careful about being outspoken in class regarding teaching and professional research. If he continued to mention those things, the faculty would not be amenable to working with him on research projects or helping him get through.

I can remember back to my doctoral program in the late 90's and we did the same thing. To get along with the faculty you never expressed a desire to teach; it was all about research. Among your fellow students you could be open about desires to teach, but not faculty. I can remember several faculty members during my job search admonishing me for the schools where I was interviewing because they were "teaching schools" and beneath their desires for where grads of our schools should be applying. (this part of the message was deleted by Jensen)

What's more, when the doctoral student in question asked his adviser about applying the research to the profession, the adviser was flummoxed by what he meant. It was not the adviser's job to apply his research to the profession but rather the profession's job to find what it needed if it wanted to. He said that personally he didn't feel any need to try to better the profession and that his profession was not accounting but academia. Needless to say, the student was discouraged and left TTTTT before the end of his first semester. He passed all four parts of the CPA exam (all scores >90) and is now doing quite well at one of the larger local CPA firms.

Thought you might like to hear another anecdotal report on what's going on in the ivory towers. I really enjoy having Steve Kachelmeier on the listserv and the debates that go on because of his willingness to interact. I know he brings a very different perspective from the majority of us on the list who are not big-name researchers.

Hope all is going well with you and Erika. The weather in Texas is miserably hot, not as bad as last year, but still hot. The forecast is for 105 tomorrow, September 6th!

 

The Sad State of Accounting Doctoral Programs in North America

"Exploring Accounting Doctoral Program Decline:  Variation and the Search for Antecedents," by Timothy J. Fogarty and Anthony D. Holder, Issues in Accounting Education, May 2012 ---
Not yet posted on June 18, 2012

ABSTRACT
The inadequate supply of new terminally qualified accounting faculty poses a great concern for many accounting faculty and administrators. Although the general downward trajectory has been well observed, more specific information would offer potential insights about causes and continuation. This paper examines change in accounting doctoral student production in the U.S. since 1989 through the use of five-year moving averages. Aggregated on this basis, the downward movement predominates, notwithstanding the schools that began new programs or increased doctoral student production during this time. The results show that larger declines occurred for middle-prestige schools, for larger universities, and for public schools. Schools that periodically compete successfully in M.B.A. program rankings are also more likely to have diminished the size of their accounting Ph.D. programs. Despite a recent increase in graduations, data on the population of current doctoral students suggest the continuation of the problems associated with the supply and demand imbalance that exists in this sector of the U.S. academy.

September 5, 2012 reply from Dan Stone

This is very sad and very true.

Tim Fogarty talks about the "ghettoization" of accounting education in some of his work and talks. The message that faculty get, and give, is that if a project has no chance of publication in a top X journal, then it is a waste of time. Not many schools are able to stand their ground and value accounting education in the face of its absence from any of the "top" accounting journals.

The paradox and irony is that accounting faculty devalue and degrade the very thing that most of them spend the most time doing. We seem to follow a variant of Woody Allen's maxim, "I would never join a club that would have me as a member." Here, it is, "I would never accept a paper for publication that concerns what I do with most of my time."

As Pogo said, "we have met the enemy and they is us."

Dan Stone

Jensen Comment
This is a useful update on the shortage of doctoral graduates relative to the demand for new tenure-track faculty in North American universities. However, it does not suggest any reasons or remedies for this phenomenon. The accounting doctoral program in many ways defies the laws of supply and demand: accounting faculty are among the highest paid faculty in rank (except possibly in unionized colleges and universities that are not wage competitive). For suggested causes and remedies of this problem, see below.

Accountancy Doctoral Program Information from Jim Hasselback ---
http://www.jrhasselback.com/AtgDoctInfo.html 

Especially note the table of the entire history of accounting doctoral graduates for all AACSB universities in the U.S. ---
http://www.jrhasselback.com/AtgDoct/XDocChrt.pdf
In that table you can note the rise or decline (almost all declines) for each university.

Links to 91 AACSB University Doctoral Programs ---
http://www.jrhasselback.com/AtgDoct/AtgDoctProg.html

How Accountics Scientists Should Change: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm 

The AAA's Pathways Commission Accounting Education Initiatives Make National News
Accountics Scientists Should Especially Note the First Recommendation

"Accounting for Innovation," by Elise Young, Inside Higher Ed, July 31, 2012 ---
http://www.insidehighered.com/news/2012/07/31/updating-accounting-curriculums-expanding-and-diversifying-field

Accounting programs should promote curricular flexibility to capture a new generation of students who are more technologically savvy, less patient with traditional teaching methods, and more wary of the career opportunities in accounting, according to a report released today by the Pathways Commission, which studies the future of higher education for accounting.

In 2008, the U.S. Treasury Department's  Advisory Committee on the Auditing Profession recommended that the American Accounting Association and the American Institute of Certified Public Accountants form a commission to study the future structure and content of accounting education, and the Pathways Commission was formed to fulfill this recommendation and establish a national higher education strategy for accounting.

In the report, the commission acknowledges that some sporadic changes have been adopted, but it seeks to put in place a structure for much more regular and ambitious changes.

The report includes seven recommendations:

According to the report, its two sponsoring organizations -- the American Accounting Association and the American Institute of Certified Public Accountants -- will support the effort to carry out the report's recommendations, and they are finalizing a strategy for conducting this effort.

Hsihui Chang, a professor and head of Drexel University’s accounting department, said colleges must prepare students for the accounting field by encouraging three qualities: integrity, analytical skills and a global viewpoint.

“You need to look at things in a global scope,” he said. “One thing we’re always thinking about is how can we attract students from diverse groups?” Chang said the department’s faculty comprises members from several different countries, and the university also has four student organizations dedicated to accounting -- including one for Asian students and one for Hispanic students.

He said the university hosts guest speakers and accounting career days to provide information to prospective accounting students about career options: “They find out, ‘Hey, this seems to be quite exciting.’ ”

Jimmy Ye, a professor and chair of the accounting department at Baruch College of the City University of New York, wrote in an email to Inside Higher Ed that his department is already fulfilling some of the report’s recommendations by inviting professionals from accounting firms into classrooms and bringing in research staff from accounting firms to interact with faculty members and Ph.D. students.

Ye also said the AICPA should collect and analyze supply and demand trends in the accounting profession -- but not just in the short term. “Higher education does not just train students for getting their first jobs,” he wrote. “I would like to see some study on the career tracks of college accounting graduates.”

Mohamed Hussein, a professor and head of the accounting department at the University of Connecticut, also offered ways for the commission to expand its recommendations. He said the recommendations can’t be fully put into practice with the current structure of accounting education.

“There are two parts to this: one part is being able to have an innovative curriculum that will include changes in technology, changes in the economics of the firm, including risk, international issues and regulation,” he said. “And the other part is making sure that the students will take advantage of all this innovation.”

The university offers courses on some of these issues as electives, but it can’t fit all of the information in those courses into the major’s required courses, he said.

Continued in article

Jensen Comment
This is one of the most important initiatives to emerge from the AAA in recent years.

I would like to be optimistic, but change will be very slow. President Wilson, who was also a professor, once remarked that it's easier to move a cemetery than to change a university.

It is easier to move a cemetery than to affect a change in curriculum.
Woodrow Wilson

President of Princeton University 1902-1910
President of the United States 1913-1921

And in the 21st Century you can imagine the lawsuits that would clog the courts if a town tried to move a cemetery.

I think most graduates of accounting doctoral programs who are chosen to serve as referees of submissions to TAR, JAR, and JAE are econometricians, psychometricians, and mathematicians. The problem is not so much the quality of the referees of accountics submissions.

And some, albeit not all, TAR, JAR, and JAE referees have backgrounds in accounting. The problem is that their study of accounting ended before they started accounting doctoral programs. Therein lies the problem with the incredibly shrinking accounting doctoral programs. These programs are shunned by students seeking to become accounting PhDs instead of social science and mathematics PhDs ---
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms 

It's not so much that accountics science is second rate. It was second rate in the 1960s and the early 1970s, but that is no longer the case in the 21st Century. The problem is that accountics science became the only track in literally all of the accounting doctoral programs of universities accredited by the AACSB.

And accountics science dominates the promotion and tenure tracks of our research universities, since the leading academic accounting research journals will not publish submissions that are not accountics science, or even accept commentaries that challenge the findings of the submissions they do publish --- http://www.trinity.edu/rjensen/TheoryTAR.htm 

The appeal of the 2012 Pathways Commission, put simply, is to open up alternate research tracks in our doctoral programs to students who want to study accounting in those programs. There is also an appeal for our top tenure-track research journals to be more open to alternate research methodologies such as case study research, field study research, and accounting history research.

Dan Stone was correct in his quotation from Pogo. The only thing that stands in the way of implementation of the 2012 Pathways Commission initiatives is us. And we're resisting changing doctoral programs in ways that would make doctoral students and possibly their advisers leave campus to collect and analyze data. Horrors! Who wants to mingle with real-world practicing accountants at anything other than cocktail parties? http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

September 7, 2012 message from David Johnstone

Dear all, it always seemed to me that statistics in medicine had a level of earnestness and expert input that you would expect in a field where results cost so much to produce and often hugely matter, both in human welfare and potential income/litigation. Many professional statisticians work in medicine and biology generally, and the journal Biometrika is extremely high standard. There are many cases of applied medical statisticians publishing major pure theory papers in stats theory journals, and also textbooks that become standard references in statistics departments. In econometrics there are a few such people (e.g. Zellner). Some techniques apply really well to an applied field and get developed there rather than in their "home" field. I think discrete choice models were very largely developed in econometrics (and their software was written there too).

As a strategy for empirical researchers in accounting, it seems to me that enlisting help from pure statisticians is a clever way to do new or better work. If you glance at medical journals you often see joint papers written by a medico and a statistician from different departments and buildings on the campus. R.A. Fisher developed much of modern statistical theory because he was an agricultural scientist who needed to design and interpret experiments. Gosset of "Student's t-test" was a brewery researcher, who wanted valid interpretations of his sample observations.

There are suggestions these days that drug companies have gained influence over some medical research programs, but the basic laws of nature, and the fact that a really bad drug will tend to be found out in a "natural experiment" once it's on the market, must be in medical researchers' minds constantly. Publication in these circumstances is only the start of the story.

September 7,  2012 reply from Bob Jensen

Hi David,

I agree fully with everything you said, although outsourcing accountics science research to non-accounting quants is not likely to happen since there are virtually no research grants from government or industry for accountics science research.

And we must face up to the fact that statistical research in medicine (e.g., in epidemiology and drug testing) is only part of all medical research. In addition there is a tremendous proportion of implementation research in medicine intended to improve diagnosis (e.g., artificial intelligence and virus discovery in biology and genetics) and treatment (e.g., new surgical techniques and prostheses).

What is lacking in accountics science are the components of diagnosis and treatment of benefit to practicing accountants. This of course was the main point of Harvard's Bob Kaplan in his fabulous 2010 AAA plenary session presentation when he implied that accountics scientists only focused on narrow research akin to epidemiology research in medicine.

In any case, until accountics scientists have access to serious research grant money (including contributions to university overhead), I don't think there will be much accountics scientist research outsourcing to statistical experts.

It is also interesting how anthropology took much of the statistical research out of the academic side of their discipline.

Anthropology Without Science: A new long-range plan for the American Anthropological Association that omits the word “science” from the organization's vision for its future has exposed fissures in the discipline ---
http://www.trinity.edu/rjensen/HigherEdControversies.htm#AntropologyNonScience

I'm not proposing that academic accountants go to the extremes of having accounting research without science. What I am proposing is that we have some alternate tracks in accountancy doctoral programs and leading accounting research journals. This is also what the Pathways Commission is seeking.


Respectfully,
Bob Jensen

 

 

Bob Jensen's threads on Higher Education Controversies and Need for Change ---
http://www.trinity.edu/rjensen/HigherEdControversies.htm

The sad state of accountancy doctoral programs ---
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms

How Accountics Scientists Should Change: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm 

July 31, 2012 reply from Paul Williams

Bob, A good place to start is to jettison pretenses of accounting being a science. As Anthony Hopwood noted in his presidential address, accounting is a practice. The tools of science are certainly useful, but using those tools to investigate accounting problems is quite a different matter than claiming that accounting is a science. Teleology doesn't enter the picture in the sciences -- nature is governed by laws, not purposes. Accounting is nothing but a purposeful activity and must (as Jagdish has eloquently noted here and in his Critical Perspectives on Accounting article) deal with values, law and ethics. As Einstein said, "In nature there are no rewards or punishments, only consequences." For a social practice like accounting to pretend there are only consequences (as if economics was a science that deals only with "natural kinds) has been a major failing of the academy in fulfilling its responsibilities to a discipline that also claims to be a profession. In spite of a "professional economist's" claims made here that economics is a science, there is quite some controversy over that even within the economic community. Ha-Joon Chang, another professional economist at Cambridge U. had this to say about the economics discipline: "Recognizing that the boundaries of the market are ambiguous and cannot be determined in an objective way lets us realize that economics is not a science like physics or chemistry, but a political exercise. Free-market economists may want you to believe that the correct boundaries of the market can be scientifically determined, but this is incorrect. If the boundaries of what you are studying cannot be scientifically determined what you are doing is not a science (23 Things They Don't Tell You About Capitalism, p. 10)." 
The silly persistence of professional accountants in asserting that accounting is apolitical and aethical may be a rationalization they require, but for academics to harbor the same beliefs seems to be a decidedly unscientific posture to take. In one of Ed Arrington's articles published some time ago, he argued that accounting's pretenses of being scientific are risible. As he said (as near as I can recall): "Watching the positive accounting show, Einstein's gods must be rolling in the aisles."

 


"Science-Driven Innovation: The Final Frontier," by Donald Ingber, Chronicle of Higher Education, November 4, 2013 ---
http://chronicle.com/article/Science-Driven-Innovation-The/142785/?cid=wc&utm_source=wc&utm_medium=en

There has been a great deal of discussion recently—much of it fraught with frustration—about the challenges facing our nation's academic communities: How do we support basic curiosity-driven research and maintain our position as the global leader in innovation and technology at a time of rapidly dwindling government funds? This dilemma was at the heart of a workshop convened by the National Academies that I attended in September in Washington.

One potential solution, much discussed at the conference, is through the creation of a new model of transdisciplinary research that pulls together investigators from many disciplines to focus, or converge, on high-value, near-term goals. This excites the industrial sector because it generates information that can more quickly translate into commercial innovation, but many people in the scientific community are frankly terrified by this approach. They feel that focusing on solving specific problems in the short term could steal funds from fundamental, investigator-driven research that delves into new terrain—essentially, the scientific equivalent of Captain Kirk's "final frontier"—and which often uncovers high-value problems and solutions that no one knew existed.

There is a solution to this conundrum. I serve as founding director of the Wyss Institute for Biologically Inspired Engineering at Harvard University, which develops engineering innovations by emulating how nature builds. With support from a major philanthropic gift, and from visionary leadership at Harvard and our affiliated hospitals and universities, we developed a new model of innovation, collaboration, and technology translation that has attracted more than $125-million in research support from federal agencies, private foundations, and for-profit companies.

Continued in article

Jensen Comment
Some of this article applies to accountics science. In recent decades accountics scientists have produced virtually no inventions for the practicing side of the accounting and business profession ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Inventors

Practitioners literally ignore the findings of accountics science, findings that they think either discover the obvious or are of little or no use to the profession:
Essays on the State of Accounting Scholarship
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Essays
Especially note the Cargo Cult Accountics Scientists
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#CargoCult

One potential solution  ... is through the creation of a new model of transdisciplinary research that pulls together investigators from many disciplines to focus, or converge, on high-value, near-term goals. This excites the industrial sector because it generates information that can more quickly translate into commercial innovation, but many people in the scientific community are frankly terrified by this approach. They feel that focusing on solving specific problems in the short term could steal funds from fundamental, investigator-driven research that delves into new terrain—essentially, the scientific equivalent of Captain Kirk's "final frontier"—and which often uncovers high-value problems and solutions that no one knew existed.

What is different about accountics science versus real science is that in real science "this excites the industrial sector because it generates information that can more quickly translate into commercial innovation." Not so in accountics science. The track record to date of accountics scientists in generating findings that translate into professional innovation is so lousy that a similar "transdisciplinary research" initiative is not likely to generate much excitement among accounting practitioners.

However, I would applaud loudly if accountics scientists would make an attempt to excite the profession of accountancy with a similar proposal for "transdisciplinary research." But I don't have much hope.

I wrote the following on December 1, 2004 at
http://www.trinity.edu/rjensen//theory/00overview/theory01.htm#AcademicsVersusProfession

Faculty interest in a professor’s “academic” research may be greater for a number of reasons. Academic research fits into a methodology that other professors like to hear about and critique. Since academic accounting and finance journals are methodology driven, there is potential benefit from being inspired to conduct a follow up study using the same or similar methods. In contrast, practitioners are more apt to look at relevant (big) problems for which there are no research methods accepted by the top journals.

Accounting Research Farmers Are More Interested in Their Tractors Than in Their Harvests

For a long time I’ve argued that top accounting research journals are just not interested in the relevance of their findings (except in the areas of tax and AIS). If the journals were primarily interested in the findings themselves, they would abandon their policies about not publishing replications of published research findings. If accounting researchers were more interested in relevance, they would conduct more replication studies. In countless instances in our top accounting research journals, the findings themselves just aren’t interesting enough to replicate. This is something that I attacked at http://www.trinity.edu/rjensen/book02q4.htm#Replication

At one point back in the 1980s there was a chance for accounting programs that were becoming "Schools of Accountancy" to become more like law schools and to have their elite professors become more closely aligned with the legal profession. Law schools and top law journals are less concerned about science than they are about case methodology driven by the practice of law. But the elite professors of accounting, who already had a vested interest in scientific methodology (e.g., positivism) and analytical modeling, beat down case methodology. I once heard Bob Kaplan say to an audience that no elite accounting research journal would publish his case research. Science methodologies work great in the natural sciences. They are problematic in psychology and sociology. They are even more problematic in the professions of accounting, law, journalism/communications, and political "science."


Recall that Bill Sharpe of CAPM fame and controversy is a Nobel Laureate ---
http://en.wikipedia.org/wiki/William_Forsyth_Sharpe

"Don’t Over-Rely on Historical Data to Forecast Future Returns," by Charles Rotblut and William Sharpe, AAII Journal, October 2014 ---
http://www.aaii.com/journal/article/dont-over-rely-on-historical-data-to-forecast-future-returns?adv=yes

Jensen Comment
The same applies to not over-relying on historical data in valuation. My favorite case study that I used for this in teaching is the following:
"Questrom vs. Federated Department Stores, Inc.: A Question of Equity Value," by University of Alabama faculty members Gary Taylor, William Sampson, and Benton Gup, May 2001 edition of Issues in Accounting Education ---
http://www.trinity.edu/rjensen/roi.htm

Jensen Comment
I want to especially thank David Stout, Editor of the May 2001 edition of Issues in Accounting Education.  There has been something special in all the editions edited by David, but the May edition is very special to me.  All the articles in that edition are helpful, but I want to call attention to three articles that I will use intently in my graduate Accounting Theory course.

Bob Jensen's threads on accounting theory ---
http://www.trinity.edu/rjensen/Theory01.htm


"Devil's Advocate: The Most Incorrect Beliefs of Accounting Experts," by Sudipta Basu, SSRN, December 1, 2013 ---
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2426581

This commentary reflects the views of a panel of six experts tasked with writing an essay on the most incorrect beliefs of accounting experts. The title provides ample motivation for this discussion – to document the views of some thought leaders in accounting research on a seldom-debated and mostly ignored issue – incorrect beliefs. While each essay offers a thoughtful message on its own, in combination they reflect an even stronger view, and offer sound advice for accountants of all stripes and persuasions.

Free Download --- http://papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID2426581_code105808.pdf?abstractid=2426581&mirid=1

Accounting Horizons, Vol. 27, No. 4, 2013

The video of this presentation, as well as the presentations for the other commentaries in this issue, is available by clicking on the link below.
http://dx.doi.org/10.2308/acch-10364.s1


Unlike real scientists, accountics scientists seldom replicate (reproduce) published accountics science research by the exacting standards of real science ---
http://www.trinity.edu/rjensen/TheoryTAR.htm#Replication

Stationary Process --- http://en.wikipedia.org/wiki/Stationary_process

Multicollinearity --- http://en.wikipedia.org/wiki/Multicollinearity

Robust Statistics --- http://en.wikipedia.org/wiki/Robust_statistics

Robust statistics are statistics with good performance for data drawn from a wide range of probability distributions, especially for distributions that are not normally distributed. Robust statistical methods have been developed for many common problems, such as estimating location, scale and regression parameters. One motivation is to produce statistical methods that are not unduly affected by outliers. Another motivation is to provide methods with good performance when there are small departures from parametric distributions. For example, robust methods work well for mixtures of two normal distributions with different standard-deviations, for example, one and three; under this model, non-robust methods like a t-test work badly.

Continued in article
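The mixture-of-normals example in the excerpt is easy to verify by simulation. The sketch below is my own illustration, not from the article: it contaminates a standard normal sample with a wider normal component and compares the Monte Carlo power of an ordinary one-sample t-test against a robust rank-based alternative (the Wilcoxon signed-rank test). The sample size, shift, and contamination fraction are arbitrary choices for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def contaminated_sample(n, frac=0.10, wide_sd=3.0):
    """90% N(0,1) plus 10% N(0,9): symmetric around zero but heavy-tailed."""
    x = rng.normal(0.0, 1.0, n)
    wide = rng.random(n) < frac
    x[wide] = rng.normal(0.0, wide_sd, wide.sum())
    return x

def power(pvalue_fn, shift=0.4, n=60, reps=500, alpha=0.05):
    """Monte Carlo rejection rate when the true location is `shift`."""
    hits = sum(pvalue_fn(contaminated_sample(n) + shift) < alpha
               for _ in range(reps))
    return hits / reps

t_power = power(lambda x: stats.ttest_1samp(x, 0.0).pvalue)  # classical t-test
w_power = power(lambda x: stats.wilcoxon(x).pvalue)          # robust rank test
# Under contamination the rank-based test detects the shift more often.
```

The Wilcoxon test stands in here for the broader family of robust location methods the excerpt mentions (trimmed means, M-estimators); any of those would make the same point.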

Jensen Comment
To this it might be added that models that grow adaptively by adding components in sequence are not robust if the mere order in which components are added changes the outcome of the ultimate model.

David Johnstone wrote the following:

Indeed if you hold H0 the same and keep changing the model, you will eventually (generally soon) get a significant result, allowing “rejection of H0 at 5%”, not because H0 is necessarily false but because you have built upon a false model (of which there are zillions, obviously).
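Johnstone's point can be demonstrated by simulation. In the sketch below (my own illustration, with arbitrary sizes), y is pure noise, so every "no relation" null is true; yet screening twenty candidate specifications produces at least one slope "significant at 5%" in roughly 1 − 0.95^20 ≈ 64% of trials.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def any_significant(n=100, k=20, alpha=0.05):
    """Regress a pure-noise y on k unrelated candidate regressors in turn;
    report whether any slope comes out 'significant' at the 5% level."""
    y = rng.normal(size=n)
    return any(stats.linregress(rng.normal(size=n), y).pvalue < alpha
               for _ in range(k))

trials = 400
rate = sum(any_significant() for _ in range(trials)) / trials
# rate lands far above the nominal 5% error rate of any single test
```

Swapping regressors stands in for "changing the model"; varying functional forms or control sets instead would inflate the rejection rate in the same way.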

Jensen Comment
I spent a goodly part of two think-tank years trying in vain to invent robust adaptive regression and clustering models where I tried to adaptively reduce modeling error by adding missing variables and covariance components. To my great frustration I found that adaptive regression and cluster analysis seems to almost always suffer from lack of robustness. Different outcomes can be obtained simply because of the order in which new components are added to the model, i.e., ordering of inputs changes the model solutions.
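The order-dependence described above can be reproduced with a toy stagewise regression, which is my own stand-in for the adaptive model-building Jensen mentions, not his original models. With two strongly correlated predictors, fitting y on one variable and then fitting the residuals on the other yields a very different coefficient pair depending on which variable enters first, while a joint OLS fit recovers the true values.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + np.sqrt(1 - 0.9 ** 2) * rng.normal(size=n)  # corr(x1,x2) ~ 0.9
y = x1 + x2 + rng.normal(size=n)                            # true coefficients (1, 1)

def slope(u, v):
    """OLS slope of v on u (no intercept; variables are mean-zero by design)."""
    return u @ v / (u @ u)

def stagewise(first, second):
    """Fit y on `first`, then fit the residuals on `second`."""
    b1 = slope(first, y)
    b2 = slope(second, y - b1 * first)
    return b1, b2

b_x1_first = stagewise(x1, x2)  # x1 enters first: roughly (1.9, 0.19)
b_x2_first = stagewise(x2, x1)  # x2 enters first: the mirror image
b_joint = np.linalg.lstsq(np.column_stack([x1, x2]), y, rcond=None)[0]  # near (1, 1)
```

Because each stage absorbs the shared variance of the correlated predictors, whichever variable enters first gets credit for both effects; the order of entry, not the data, determines the "model."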

Accountics scientists who declare they have "significant results" may also have non-robust results that they fail to analyze.

When you combine issues of non-robustness with the impossibility of testing for covariance, you have a real mess in accountics science and econometrics in general.

It's relatively uncommon for accountics scientists to criticize each other's published works. A notable exception is as follows:
"Selection Models in Accounting Research," by Clive S. Lennox, Jere R. Francis, and Zitian Wang,  The Accounting Review, March 2012, Vol. 87, No. 2, pp. 589-616.

This study explains the challenges associated with the Heckman (1979) procedure to control for selection bias, assesses the quality of its application in accounting research, and offers guidance for better implementation of selection models. A survey of 75 recent accounting articles in leading journals reveals that many researchers implement the technique in a mechanical way with relatively little appreciation of important econometric issues and problems surrounding its use. Using empirical examples motivated by prior research, we illustrate that selection models are fragile and can yield quite literally any possible outcome in response to fairly minor changes in model specification. We conclude with guidance on how researchers can better implement selection models that will provide more convincing evidence on potential selection bias, including the need to justify model specifications and careful sensitivity analyses with respect to robustness and multicollinearity.

. . .

CONCLUSIONS

Our review of the accounting literature indicates that some studies have implemented the selection model in a questionable manner. Accounting researchers often impose ad hoc exclusion restrictions or no exclusion restrictions whatsoever. Using empirical examples and a replication of a published study, we demonstrate that such practices can yield results that are too fragile to be considered reliable. In our empirical examples, a researcher could obtain quite literally any outcome by making relatively minor and apparently innocuous changes to the set of exclusionary variables, including choosing a null set. One set of exclusion restrictions would lead the researcher to conclude that selection bias is a significant problem, while an alternative set involving rather minor changes would give the opposite conclusion. Thus, claims about the existence and direction of selection bias can be sensitive to the researcher's set of exclusion restrictions.

Our examples also illustrate that the selection model is vulnerable to high levels of multicollinearity, which can exacerbate the bias that arises when a model is misspecified (Thursby 1988). Moreover, the potential for misspecification is high in the selection model because inferences about the existence and direction of selection bias depend entirely on the researcher's assumptions about the appropriate functional form and exclusion restrictions. In addition, high multicollinearity means that the statistical insignificance of the inverse Mills' ratio is not a reliable guide as to the absence of selection bias. Even when the inverse Mills' ratio is statistically insignificant, inferences from the selection model can be different from those obtained without the inverse Mills' ratio. In this situation, the selection model indicates that it is legitimate to omit the inverse Mills' ratio, and yet, omitting the inverse Mills' ratio gives different inferences for the treatment variable because multicollinearity is then much lower.

In short, researchers are faced with the following trade-off. On the one hand, selection models can be fragile and suffer from multicollinearity problems, which hinder their reliability. On the other hand, the selection model potentially provides more reliable inferences by controlling for endogeneity bias if the researcher can find good exclusion restrictions, and if the models are found to be robust to minor specification changes. The importance of these advantages and disadvantages depends on the specific empirical setting, so it would be inappropriate for us to make a general statement about when the selection model should be used. Instead, researchers need to critically appraise the quality of their exclusion restrictions and assess whether there are problems of fragility and multicollinearity in their specific empirical setting that might limit the effectiveness of selection models relative to OLS.

Another way to control for unobservable factors that are correlated with the endogenous regressor (D) is to use panel data. Though it may be true that many unobservable factors impact the choice of D, as long as those unobservable characteristics remain constant during the period of study, they can be controlled for using a fixed effects research design. In this case, panel data tests that control for unobserved differences between the treatment group (D = 1) and the control group (D = 0) will eliminate the potential bias caused by endogeneity as long as the unobserved source of the endogeneity is time-invariant (e.g., Baltagi 1995; Meyer 1995; Bertrand et al. 2004). The advantages of such a difference-in-differences research design are well recognized by accounting researchers (e.g., Altamuro et al. 2005; Desai et al. 2006; Hail and Leuz 2009; Hanlon et al. 2008). As a caveat, however, we note that the time-invariance of unobservables is a strong assumption that cannot be empirically validated. Moreover, the standard errors in such panel data tests need to be corrected for serial correlation because otherwise there is a danger of over-rejecting the null hypothesis that D has no effect on Y (Bertrand et al. 2004).10
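The fixed-effects logic described above can be made concrete with a minimal two-period simulation (all numbers are my assumptions): a time-invariant unobservable that drives treatment choice contaminates the cross-sectional comparison but differences out.

```python
# Minimal difference-in-differences sketch on hypothetical simulated data.
# A time-invariant unobservable a_i drives treatment choice, biasing the
# cross-sectional comparison; first-differencing removes a_i entirely.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
a = rng.normal(size=n)                            # unobserved fixed effect
d = (a + rng.normal(size=n) > 0).astype(float)    # treatment correlated with a
effect = 0.5                                      # true treatment effect

y_pre = a + rng.normal(scale=0.5, size=n)
y_post = a + effect * d + rng.normal(scale=0.5, size=n)

naive = y_post[d == 1].mean() - y_post[d == 0].mean()     # contaminated by a
did = ((y_post - y_pre)[d == 1].mean()
       - (y_post - y_pre)[d == 0].mean())                 # a differences out
print(f"cross-sectional estimate: {naive:.2f}  diff-in-diff: {did:.2f}")
```

With more than two periods the same idea is implemented with unit fixed effects, and, per Bertrand et al. (2004), standard errors should be clustered to guard against serial correlation.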

Finally, we note that there is a recent trend in the accounting literature to use samples that are matched based on their propensity scores (e.g., Armstrong et al. 2010; Lawrence et al. 2011). An advantage of propensity score matching (PSM) is that there is no MILLS variable and so the researcher is not required to find valid Z variables (Heckman et al. 1997; Heckman and Navarro-Lozano 2004). However, such matching has two important limitations. First, selection is assumed to occur only on observable characteristics. That is, the error term in the first stage model is correlated with the independent variables in the second stage (i.e., u is correlated with X and/or Z), but there is no selection on unobservables (i.e., u and υ are uncorrelated). In contrast, the purpose of the selection model is to control for endogeneity that arises from unobservables (i.e., the correlation between u and υ). Therefore, propensity score matching should not be viewed as a replacement for the selection model (Tucker 2010).

A second limitation arises if the treatment variable affects the company's matching attributes. For example, suppose that a company's choice of auditor affects its subsequent ability to raise external capital. This would mean that companies with higher quality auditors would grow faster. Suppose also that the company's characteristics at the time the auditor is first chosen cannot be observed. Instead, we match at some stacked calendar time where some companies have been using the same auditor for 20 years and others for not very long. Then, if we matched on company size, we would be throwing out the companies that have become large because they have benefited from high-quality audits. Such companies do not look like suitable “matches,” insofar as they are much larger than the companies in the control group that have low-quality auditors. In this situation, propensity matching could bias toward a non-result because the treatment variable (auditor choice) affects the company's matching attributes (e.g., its size). It is beyond the scope of this study to provide a more thorough assessment of the advantages and disadvantages of propensity score matching in accounting applications, so we leave this important issue to future research.

Jensen Comment
To this we might add that it is impossible to test for multicollinearity in these linear models. As Dave Giles explains below, multicollinearity is a property of the sample, not a testable hypothesis about the population.

"Can You Actually TEST for Multicollinearity?" --- Click Here
http://davegiles.blogspot.com/2013/06/can-you-actually-test-for.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+blogspot%2FjjOHE+%28Econometrics+Beat%3A+Dave+Giles%27+Blog%29

. . .

Now, let's return to the "problem" of multicollinearity.

 
What do we mean by this term, anyway? This turns out to be the key question!

 
Multicollinearity is a phenomenon associated with our particular sample of data when we're trying to estimate a regression model. Essentially, it's a situation where there is insufficient information in the sample of data to enable us to draw "reliable" inferences about the individual parameters of the underlying (population) model.


I'll be elaborating more on the "informational content" aspect of this phenomenon in a follow-up post. Yes, there are various sample measures that we can compute and report, to help us gauge how severe this data "problem" may be. But they're not statistical tests, in any sense of the word.

 

Because multicollinearity is a characteristic of the sample, and not a characteristic of the population, you should immediately be suspicious when someone starts talking about "testing for multicollinearity". Right?


Apparently not everyone gets it!


There's an old paper by Farrar and Glauber (1967) which, on the face of it might seem to take a different stance. In fact, if you were around when this paper was published (or if you've bothered to actually read it carefully), you'll know that this paper makes two contributions. First, it provides a very sensible discussion of what multicollinearity is all about. Second, the authors take some well known results from the statistics literature (notably, by Wishart, 1928; Wilks, 1932; and Bartlett, 1950) and use them to give "tests" of the hypothesis that the regressor matrix, X, is orthogonal.


How can this be? Well, there's a simple explanation if you read the Farrar and Glauber paper carefully, and note what assumptions are made when they "borrow" the old statistics results. Specifically, there's an explicit (and necessary) assumption that in the population the X matrix is random, and that it follows a multivariate normal distribution.


This assumption is, of course totally at odds with what is usually assumed in the linear regression model! The "tests" that Farrar and Glauber gave us aren't really tests of multicollinearity in the sample. Unfortunately, this point wasn't fully appreciated by everyone.


There are some sound suggestions in this paper, including looking at the sample multiple correlations between each regressor, and all of the other regressors. These, and other sample measures such as variance inflation factors, are useful from a diagnostic viewpoint, but they don't constitute tests of "zero multicollinearity".
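Giles's distinction between diagnostics and tests is easy to make concrete. The variance inflation factors computed below (a standard diagnostic, built from scratch here on simulated data of my own choosing) quantify how collinear a particular sample is, but they carry no null distribution and hence no p-value: they diagnose, they do not test.

```python
# Variance inflation factors are sample diagnostics, not hypothesis tests:
# there is no null distribution and no p-value attached to them.
# Hypothetical data; the 0.95 correlation is chosen to make two VIFs large.
import numpy as np

def vif(X):
    """VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
    column j on the remaining columns (with an intercept)."""
    n, k = X.shape
    out = []
    for j in range(k):
        yj = X[:, j]
        Xj = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        resid = yj - Xj @ np.linalg.lstsq(Xj, yj, rcond=None)[0]
        r2 = 1 - resid.var() / yj.var()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(3)
x1 = rng.normal(size=500)
x2 = 0.95 * x1 + np.sqrt(1 - 0.95**2) * rng.normal(size=500)  # nearly collinear
x3 = rng.normal(size=500)                                     # independent
print([round(v, 1) for v in vif(np.column_stack([x1, x2, x3]))])
```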


So, why am I even mentioning the Farrar and Glauber paper now?


Well, I was intrigued to come across some STATA code (Shehata, 2012) that allows one to implement the Farrar and Glauber "tests". I'm not sure that this is really very helpful. Indeed, this seems to me to be a great example of applying someone's results without understanding (bothering to read?) the assumptions on which they're based!


Be careful out there - and be highly suspicious of strangers bearing gifts!


 
References

Bartlett, M. S., 1950. Tests of significance in factor analysis. British Journal of Psychology, Statistical Section, 3, 77-85.

Farrar, D. E. and R. R. Glauber, 1967. Multicollinearity in regression analysis: The problem revisited. Review of Economics and Statistics, 49, 92-107.

Shehata, E. A. E., 2012. FGTEST: Stata module to compute Farrar-Glauber Multicollinearity Chi2, F, t tests.

Wilks, S. S., 1932. Certain generalizations in the analysis of variance. Biometrika, 24, 477-494.

Wishart, J., 1928. The generalized product moment distribution in samples from a multivariate normal population. Biometrika, 20A, 32-52.

 

Thank you Jagdish for casting yet another doubt on the validity of more than four decades of accountics science worship.
"Weak statistical standards implicated in scientific irreproducibility: One-quarter of studies that meet commonly used statistical cutoff may be false." by Erika Check Hayden, Nature, November 11, 2013 ---
http://www.nature.com/news/weak-statistical-standards-implicated-in-scientific-irreproducibility-1.14131

 The plague of non-reproducibility in science may be mostly due to scientists’ use of weak statistical tests, as shown by an innovative method developed by statistician Valen Johnson, at Texas A&M University in College Station.

Johnson compared the strength of two types of tests: frequentist tests, which measure how unlikely a finding is to occur by chance, and Bayesian tests, which measure the likelihood that a particular hypothesis is correct given data collected in the study. The strength of the results given by these two types of tests had not been compared before, because they ask slightly different types of questions.

So Johnson developed a method that makes the results given by the tests — the P value in the frequentist paradigm, and the Bayes factor in the Bayesian paradigm — directly comparable. Unlike frequentist tests, which use objective calculations to reject a null hypothesis, Bayesian tests require the tester to define an alternative hypothesis to be tested — a subjective process. But Johnson developed a 'uniformly most powerful' Bayesian test that defines the alternative hypothesis in a standard way, so that it “maximizes the probability that the Bayes factor in favor of the alternate hypothesis exceeds a specified threshold,” he writes in his paper. This threshold can be chosen so that Bayesian tests and frequentist tests will both reject the null hypothesis for the same test results.

Johnson then used these uniformly most powerful tests to compare P values to Bayes factors. When he did so, he found that a P value of 0.05 or less (commonly considered evidence in support of a hypothesis in fields such as social science, in which non-reproducibility has become a serious issue) corresponds to Bayes factors of between 3 and 5, which are considered weak evidence to support a finding.

False positives

Indeed, as many as 17–25% of such findings are probably false, Johnson calculates. He advocates for scientists to use more stringent P values of 0.005 or less to support their findings, and thinks that the use of the 0.05 standard might account for most of the problem of non-reproducibility in science — even more than other issues, such as biases and scientific misconduct.
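Johnson's uniformly most powerful Bayesian tests are developed in his paper. A simpler, older calibration (the Sellke, Bayarri, and Berger 2001 bound, used here purely as an illustration and not as Johnson's method) makes the same point about how weak the 0.05 standard is: the Bayes factor in favor of the alternative can never exceed 1/(-e * p * ln p), which is only about 2.5 when p = 0.05.

```python
# Minimum-Bayes-factor calibration of a p-value (Sellke-Bayarri-Berger 2001
# bound), used only to illustrate how little evidence p = 0.05 carries.
# This is an assumption of mine, not the UMPBT construction in Johnson's paper.
import math

def max_bayes_factor(p):
    """Upper bound on the Bayes factor favoring the alternative, p < 1/e."""
    assert 0 < p < 1 / math.e
    return 1.0 / (-math.e * p * math.log(p))

for p in (0.05, 0.005):
    print(f"p = {p}: Bayes factor vs. null at most {max_bayes_factor(p):.1f}")
```

Even under this most favorable calibration, p = 0.05 cannot support odds better than about 2.5 to 1 against the null, while Johnson's proposed 0.005 standard corresponds to odds of roughly 14 to 1.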

“Very few studies that fail to replicate are based on P values of 0.005 or smaller,” Johnson says.

Some other mathematicians said that though there have been many calls for researchers to use more stringent tests, the new paper makes an important contribution by laying bare exactly how lax the 0.05 standard is.

“It shows once more that standards of evidence that are in common use throughout the empirical sciences are dangerously lenient,” says mathematical psychologist Eric-Jan Wagenmakers of the University of Amsterdam. “Previous arguments centered on ‘P-hacking’, that is, abusing standard statistical procedures to obtain the desired results. The Johnson paper shows that there is something wrong with the P value itself.”

Other researchers, though, said it would be difficult to change the mindset of scientists who have become wedded to the 0.05 cutoff. One implication of the work, for instance, is that studies will have to include more subjects to reach these more stringent cutoffs, which will require more time and money.

“The family of Bayesian methods has been well developed over many decades now, but somehow we are stuck to using frequentist approaches,” says physician John Ioannidis of Stanford University in California, who studies the causes of non-reproducibility. “I hope this paper has better luck in changing the world.”

574 Shields Against Validity Challenges in Plato's Cave
An Appeal for Replication and Commentaries in Accountics Science
http://www.trinity.edu/rjensen/TheoryTAR.htm


"Is Economics a Science," by Robert Shiller, QFinance, November 8, 2013 --- Click Here
http://www.qfinance.com/blogs/robert-j.shiller/2013/11/08/nobel-is-economics-a-science?utm_source=November+2013+email&utm_medium=Email&utm_content=Blog2&utm_campaign=EmailNov13

NEW HAVEN – I am one of the winners of this year’s Nobel Memorial Prize in Economic Sciences, which makes me acutely aware of criticism of the prize by those who claim that economics – unlike chemistry, physics, or medicine, for which Nobel Prizes are also awarded – is not a science. Are they right?

One problem with economics is that it is necessarily focused on policy, rather than discovery of fundamentals. Nobody really cares much about economic data except as a guide to policy: economic phenomena do not have the same intrinsic fascination for us as the internal resonances of the atom or the functioning of the vesicles and other organelles of a living cell. We judge economics by what it can produce. As such, economics is rather more like engineering than physics, more practical than spiritual.

There is no Nobel Prize for engineering, though there should be. True, the chemistry prize this year looks a bit like an engineering prize, because it was given to three researchers – Martin Karplus, Michael Levitt, and Arieh Warshel – “for the development of multiscale models of complex chemical systems” that underlie the computer programs that make nuclear magnetic resonance hardware work. But the Nobel Foundation is forced to look at much more such practical, applied material when it considers the economics prize.

The problem is that, once we focus on economic policy, much that is not science comes into play. Politics becomes involved, and political posturing is amply rewarded by public attention. The Nobel Prize is designed to reward those who do not play tricks for attention, and who, in their sincere pursuit of the truth, might otherwise be slighted.
 

The pursuit of truth


Why is it called a prize in “economic sciences”, rather than just “economics”? The other prizes are not awarded in the “chemical sciences” or the “physical sciences”.

 

Fields of endeavor that use “science” in their titles tend to be those that get masses of people emotionally involved and in which crackpots seem to have some purchase on public opinion. These fields have “science” in their names to distinguish them from their disreputable cousins.

The term political science first became popular in the late eighteenth century to distinguish it from all the partisan tracts whose purpose was to gain votes and influence rather than pursue the truth. Astronomical science was a common term in the late nineteenth century, to distinguish it from astrology and the study of ancient myths about the constellations. Hypnotic science was also used in the nineteenth century to distinguish the scientific study of hypnotism from witchcraft or religious transcendentalism.
 

Crackpot counterparts


There was a need for such terms back then, because their crackpot counterparts held much greater sway in general discourse. Scientists had to announce themselves as scientists.

 

In fact, even the term chemical science enjoyed some popularity in the nineteenth century – a time when the field sought to distinguish itself from alchemy and the promotion of quack nostrums. But the need to use that term to distinguish true science from the practice of impostors was already fading by the time the Nobel Prizes were launched in 1901.

Similarly, the terms astronomical science and hypnotic science mostly died out as the twentieth century progressed, perhaps because belief in the occult waned in respectable society. Yes, horoscopes still persist in popular newspapers, but they are there only for the severely scientifically challenged, or for entertainment; the idea that the stars determine our fate has lost all intellectual currency. Hence there is no longer any need for the term “astronomical science.”
 

Pseudoscience?


Critics of “economic sciences” sometimes refer to the development of a “pseudoscience” of economics, arguing that it uses the trappings of science, like dense mathematics, but only for show. For example, in his 2004 book, Fooled by Randomness, Nassim Nicholas Taleb said of economic sciences: “You can disguise charlatanism under the weight of equations, and nobody can catch you since there is no such thing as a controlled experiment.”

But physics is not without such critics, too. In his 2004 book, The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next, Lee Smolin reproached the physics profession for being seduced by beautiful and elegant theories (notably string theory) rather than those that can be tested by experimentation. Similarly, in his 2007 book, Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law, Peter Woit accused physicists of much the same sin as mathematical economists are said to commit.


 

Exposing the charlatans


My belief is that economics is somewhat more vulnerable than the physical sciences to models whose validity will never be clear, because the necessity for approximation is much stronger than in the physical sciences, especially given that the models describe people rather than magnetic resonances or fundamental particles. People can just change their minds and behave completely differently. They even have neuroses and identity problems - complex phenomena that the field of behavioral economics is finding relevant to understand economic outcomes.

 

But all the mathematics in economics is not, as Taleb suggests, charlatanism. Economics has an important quantitative side, which cannot be escaped. The challenge has been to combine its mathematical insights with the kinds of adjustments that are needed to make its models fit the economy’s irreducibly human element.

The advance of behavioral economics is not fundamentally in conflict with mathematical economics, as some seem to think, though it may well be in conflict with some currently fashionable mathematical economic models. And, while economics presents its own methodological problems, the basic challenges facing researchers are not fundamentally different from those faced by researchers in other fields. As economics develops, it will broaden its repertory of methods and sources of evidence, the science will become stronger, and the charlatans will be exposed.

 

Real Science Versus Pseudo Science --- Click Here

 


"A Pragmatist Defence of Classical Financial Accounting Research," by Brian A. Rutherford, Abacus, Volume 49, Issue 2, pages 197–218, June 2013 ---
http://onlinelibrary.wiley.com/doi/10.1111/abac.12003/abstract

The reason for the disdain in which classical financial accounting research has come to held by many in the scholarly community is its allegedly insufficiently scientific nature. While many have defended classical research or provided critiques of post-classical paradigms, the motivation for this paper is different. It offers an epistemologically robust underpinning for the approaches and methods of classical financial accounting research that restores its claim to legitimacy as a rigorous, systematic and empirically grounded means of acquiring knowledge. This underpinning is derived from classical philosophical pragmatism and, principally, from the writings of John Dewey. The objective is to show that classical approaches are capable of yielding serviceable, theoretically based solutions to problems in accounting practice.

Jensen Comment
When it comes to the "insufficient scientific nature" of classical accounting research, I should note yet once again that accountics science never attained the status of a real science, whose main criteria are the scientific search for causes and an obsession with replication (reproducibility) of findings.

Accountics science is overrated because it only achieved the status of a pseudoscience ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Pseudo-Science

"Research on Accounting Should Learn From the Past" by Michael H. Granof and Stephen A. Zeff, Chronicle of Higher Education, March 21, 2008

The unintended consequence has been that interesting and researchable questions in accounting are essentially being ignored. By confining the major thrust in research to phenomena that can be mathematically modeled or derived from electronic databases, academic accountants have failed to advance the profession in ways that are expected of them and of which they are capable.

Academic research has unquestionably broadened the views of standards setters as to the role of accounting information and how it affects the decisions of individual investors as well as the capital markets. Nevertheless, it has had scant influence on the standards themselves.

Continued in article

"Research on Accounting Should Learn From the Past," by Michael H. Granof and
 Stephen A. Zeff, Chronicle of Higher Education, March 21, 2008

. . .

The narrow focus of today's research has also resulted in a disconnect between research and teaching. Because of the difficulty of conducting publishable research in certain areas — such as taxation, managerial accounting, government accounting, and auditing — Ph.D. candidates avoid choosing them as specialties. Thus, even though those areas are central to any degree program in accounting, there is a shortage of faculty members sufficiently knowledgeable to teach them.

To be sure, some accounting research, particularly that pertaining to the efficiency of capital markets, has found its way into both the classroom and textbooks — but mainly in select M.B.A. programs and the textbooks used in those courses. There is little evidence that the research has had more than a marginal influence on what is taught in mainstream accounting courses.

What needs to be done? First, and most significantly, journal editors, department chairs, business-school deans, and promotion-and-tenure committees need to rethink the criteria for what constitutes appropriate accounting research. That is not to suggest that they should diminish the importance of the currently accepted modes or that they should lower their standards. But they need to expand the set of research methods to encompass those that, in other disciplines, are respected for their scientific standing. The methods include historical and field studies, policy analysis, surveys, and international comparisons when, as with empirical and analytical research, they otherwise meet the tests of sound scholarship.

Second, chairmen, deans, and promotion and merit-review committees must expand the criteria they use in assessing the research component of faculty performance. They must have the courage to establish criteria for what constitutes meritorious research that are consistent with their own institutions' unique characters and comparative advantages, rather than imitating the norms believed to be used in schools ranked higher in magazine and newspaper polls. In this regard, they must acknowledge that accounting departments, unlike other business disciplines such as finance and marketing, are associated with a well-defined and recognized profession. Accounting faculties, therefore, have a special obligation to conduct research that is of interest and relevance to the profession. The current accounting model was designed mainly for the industrial era, when property, plant, and equipment were companies' major assets. Today, intangibles such as brand values and intellectual capital are of overwhelming importance as assets, yet they are largely absent from company balance sheets. Academics must play a role in reforming the accounting model to fit the new postindustrial environment.

Third, Ph.D. programs must ensure that young accounting researchers are conversant with the fundamental issues that have arisen in the accounting discipline and with a broad range of research methodologies. The accounting literature did not begin in the second half of the 1960s. The books and articles written by accounting scholars from the 1920s through the 1960s can help to frame and put into perspective the questions that researchers are now studying.

Continued in article


June 5, 2013 reply to a long thread by Bob Jensen

Hi Steve,

As usual, these AECM threads between you, me, and Paul Williams resolve nothing to date. TAR still has zero articles without equations unless such articles are forced upon editors like the Kaplan article was forced upon you as Senior Editor. TAR still has no commentaries about the papers it publishes and the authors make no attempt to communicate and have dialog about their research on the AECM or the AAA Commons.

I do hope that our AECM threads will continue and lead one day to a time when the top academic research journals do more to encourage (1) validation (usually by speedy replication), (2) alternative methodologies, (3) more innovative research, and (4) more interactive commentaries.

I remind you that Professor Basu's essay is only one of four essays bundled together in Accounting Horizons on the topic of how to make accounting research, especially the so-called Accounting Science or Accountics Science or Cargo Cult Science, more innovative.

The four essays in this bundle are summarized and extensively quoted at http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Essays 

I will try to keep drawing attention to these important essays and spend the rest of my professional life trying to bring accounting research closer to the accounting profession.

I also want to dispel the myth that accountics research is harder than making research discoveries without equations. The hardest research I can imagine (and where I failed) is to make a discovery that has a noteworthy impact on the accounting profession. I always look but never find such discoveries reported in TAR.

The easiest research is to purchase a database and beat it with an econometric stick until something falls out of the clouds. I've searched for years and find very little that has a noteworthy impact on the accounting profession. Quite often there is a noteworthy impact on other members of the Cargo Cult and doctoral students seeking to beat the same data with their sticks. But try to find a practitioner with an interest in these academic accounting discoveries?

Our latest thread leads me to such questions as:

  1. Is accounting research of inferior quality relative to other disciplines like engineering and finance?

  2. Are there serious innovation gaps in academic accounting research?

  3. Is accounting research stagnant?

  4. How can accounting researchers be more innovative?

  5. Is there an "absence of dissent" in academic accounting research?

  6. Is there an absence of diversity in our top academic accounting research journals and doctoral programs?

  7. Is there a serious disinterest (except among the Cargo Cult) in, and lack of validation of, findings reported in our academic accounting research journals, especially TAR?

  8. Is there a huge communications gap between academic accounting researchers and those who toil teaching accounting and practicing accounting?

  9. Why do our accountics scientists virtually ignore the AECM, the AAA Commons, and the Pathways Commission Report?
     http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

One fallout of this thread is that I've been privately asked to write a paper about such matters. I hope that others will compete with me in thinking and writing about these serious challenges to academic accounting research that never seem to get resolved.

Thank you Steve for sometimes responding in my threads on such issues in the AECM.

Respectfully,
Bob Jensen


Why Do Accountics Scientists Get Along So Well?

To a fault I've argued that accountics scientists do not challenge each other or do replications and other validity tests of their published research ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

By comparison, the real science game is much more a hardball game of replication, critical commentary, and other validity checking. Accountics scientists have a long way to go in their quest to become more like real scientists.

 

"Casualty of the Math Wars," by Scott Jaschik, Inside Higher Ed, October 15, 2012 ---
http://www.insidehighered.com/news/2012/10/15/stanford-professor-goes-public-attacks-over-her-math-education-research

. . .

The "math wars" have raged since the 1990s. A series of reform efforts (of which Boaler's work is a part) have won support from many scholars and a growing number of school districts. But a traditionalist school (of which Milgram and Bishop are part) has pushed back, arguing that rigor and standards are being sacrificed. Both sides accuse the other of oversimplifying the other's arguments, and studies and op-eds from proponents of the various positions appear regularly in education journals and the popular press. Several mathematics education experts interviewed for this article who are supportive of Boaler and her views stressed that they did not view all, or even most, criticism from the "traditionalist" camp as irresponsible.

The essay Boaler published Friday night noted that there has been "spirited academic debate" about her ideas and those of others in mathematics education, and she says that there is nothing wrong with that.

"Milgram and Bishop have gone beyond the bounds of reasoned discourse in a campaign to systematically suppress empirical evidence that contradicts their stance," Boaler wrote. "Academic disagreement is an inevitable consequence of academic freedom, and I welcome it. However, responsible disagreement and academic bullying are not the same thing. Milgram and Bishop have engaged in a range of tactics to discredit me and damage my work which I have now decided to make public."

Some experts who have been watching the debate say that the reason this dispute is important is because Boaler's work is not based simply on a critique of traditional methods of teaching math, but because she has data to back up her views.

Keith Devlin, director of the Human Sciences and Technologies Advanced Research Institute at Stanford, said that he has "enormous respect" for Boaler, although he characterized himself as someone who doesn't know her well, but has read her work and is sympathetic to it. He said that he shares her views, but that he does so "based on my own experience and from reading the work of others," not from his own research. So he said that while he has also faced "unprofessional" attacks when he has expressed those views, he hasn't attracted the same level of criticism as has Boaler.

Of her critics, Devlin said that "I suspect they fear her because she brings hard data that threatens their view of how children should be taught mathematics." He said that the criticisms of Boaler reach "the point of character assassination."

Debating the Data

The Milgram/Bishop essay that Boaler said has unfairly damaged her reputation is called "A Close Examination of Jo Boaler's Railside Report," and appears on Milgram's Stanford website. ("Railside" refers to one of the schools Boaler studied.) The piece says that Boaler's claims are "grossly exaggerated," and yet expresses fear that they could be influential and so need to be rebutted. Under federal privacy protection requirements for work involving schoolchildren, Boaler agreed to keep confidential the schools she studied and, by extension, information about teachers and students. The Milgram/Bishop essay claims to have identified some of those schools and says this is why they were able to challenge her data.

Boaler said -- in her essay and in an interview -- that this puts her in a bind. She cannot reveal more about the schools without violating confidentiality pledges, even though she is being accused of distorting data. While the essay by Milgram and Bishop looks like a journal article, Boaler notes that it has in fact never been published, in contrast to her work, which has been subjected to peer review in multiple journals and by various funding agencies.

Further, she notes that Milgram's and Bishop's accusations were investigated by Stanford when Milgram in 2006 made a formal charge of research misconduct against her, questioning the validity of her data collection. She notes in her new essay that the charges "could have destroyed my career." Boaler said that her final copy of the initial investigation was deemed confidential by the university, but she provided a copy of the conclusions, which rejected the idea that there had been any misconduct.

Here is the conclusion of that report: "We understand that there is a currently ongoing (and apparently passionate) debate in the mathematics education field concerning the best approaches and methods to be applied in teaching mathematics. It is not our task under Stanford's policy to determine who is 'right' and who is 'wrong' in this academic debate. We do note that Dr. Boaler's responses to the questions put to her related to her report were thorough, thoughtful, and offered her scientific rationale for each of the questions underlying the allegations. We found no evidence of scientific misconduct or fraudulent behavior related to the content of the report in question. In short, we find that the allegations (such as they are) of scientific misconduct do not have substance."

Even though the only body to examine the accusations made by Milgram rejected them, and even though the Milgram/Bishop essay has never been published beyond Milgram's website, the accusations in the essay have followed Boaler all over as supporters of Milgram and Bishop cite the essay to question Boaler's ethics. For example, an article she and a co-author wrote about her research that was published in a leading journal in education research, Teachers College Record, attracted a comment that said the findings were "imaginative" and asked if they were "a prime example of data cooking." The only evidence offered: a link to the Milgram/Bishop essay.

In an interview, Boaler said that, for many years, she has simply tried to ignore what she considers to be unprofessional, unfair criticism. But she said she was prompted to speak out after thinking about the fallout from an experience this year when Irish educational authorities brought her in to consult on math education. When she wrote an op-ed in The Irish Times, a commenter suggested that her ideas be treated with "great skepticism" because they had been challenged by prominent professors, including one at her own university. Again, the evidence offered was a link to the Stanford URL of the Milgram/Bishop essay.

"This guy Milgram has this on a webpage. He has it on a Stanford site. They have a campaign that everywhere I publish, somebody puts up a link to that saying 'she makes up data,' " Boaler said. "They are stopping me from being able to do my job."

She said one reason she decided to go public is that doing so gives her a link she can use whenever she sees a link to the essay attacking her work.

Bishop did not respond to e-mail messages requesting comment about Boaler's essay. Milgram via e-mail answered a few questions about Boaler's essay. He said she inaccurately characterized a meeting they had after she arrived at Stanford. (She said he discouraged her from writing about math education.) Milgram denied engaging in "academic bullying."

He said via e-mail that the essay was prepared for publication in a journal and was scheduled to be published, but "the HR person at Stanford has some reservations because it turned out that it was too easy to do a Google search on some of the quotes in the paper and thereby identify the schools involved. At that point I had so many other things that I had to attend to that I didn't bother to make the corrections." He also said that he has heard more from the school since he wrote the essay, and that these additional discussions confirm his criticism of Boaler's work.

In an interview Sunday afternoon, Milgram said that by "HR" in the above quote, he meant "human research," referring to the office at Stanford that works to protect human subjects in research. He also said that since it was only those issues that prevented publication, his critique was in fact peer-reviewed, just not published.

Further, he said that Stanford's investigation of Boaler was not handled well, and that those on the committee considered the issue "too delicate and too hot a potato." He said he stood behind everything in the paper. As to Boaler's overall criticism of him, he said that he would "have discussions with legal people, and I'll see if there is an appropriate action to be taken, but my own inclination is to ignore it."

Milgram also rejected the idea that it was not appropriate for him to speak out on these issues as he has. He said he first got involved in raising questions about research on math education at the request of an assistant in the office of Rod Paige, who held the job of U.S. education secretary during the first term of President George W. Bush.

Ze'ev Wurman, a supporter of Milgram and Bishop, and one who has posted the link to their article elsewhere, said he wasn't bothered by its never having been published. "She is basically using the fact that it was not published to undermine its worth rather than argue the specific charges leveled there by serious academics," he said.

Critiques 'Without Merit'

E-mail requests for comment from several leading figures in mathematics education resulted in strong endorsements of Boaler's work and frustration at how she has been treated over the years.

Jeremy Kilpatrick, a professor of mathematics education at the University of Georgia who has chaired commissions on the subject for the National Research Council and the Rand Corporation, said that "I have long had great respect for Jo Boaler and her work, and I have been very disturbed that it has been attacked as faulty or disingenuous. I have been receiving multiple e-mails from people who are disconcerted at the way she has been treated by Wayne Bishop and Jim Milgram. The critiques by Bishop and Milgram of her work are totally without merit and unprofessional. I'm pleased that she has come forward at last to give her side of the story, and I hope that others will see and understand how badly she has been treated."

Alan H. Schoenfeld is the Elizabeth and Edward Conner Professor of Education at the University of California at Berkeley, and a past president of the American Educational Research Association and past vice president of the National Academy of Education. He was reached in Sweden, where he said his e-mail has been full of commentary about Boaler's Friday post. "Boaler is a very solid researcher. You don't get to be a professor at Stanford, or the Marie Curie Professor of Mathematics Education at the University of Sussex [the position she held previously], unless you do consistently high quality, peer-reviewed research."

Schoenfeld said that the discussion of Boaler's work "fits into the context of the math wars, which have sometimes been argued on principle, but in the hands of a few partisans, been vicious and vitriolic." He said that he is on a number of informal mathematics education networks, and that the response to Boaler's essay "has been swift and, most generally, one of shock and support for Boaler." One question being asked, he said, is why Boaler was investigated and no university has investigated the way Milgram and Bishop have treated her.

A spokeswoman for Stanford said the following via e-mail: "Dr. Boaler is a nationally respected scholar in the field of math education. Since her arrival more than a decade ago, Stanford has provided extensive support for Dr. Boaler as she has engaged in scholarship in this field, which is one in which there is wide-ranging academic opinion. At the same time, Stanford has carefully respected the fundamental principle of academic freedom: the merits of a position are to be determined by scholarly debate, rather than by having the university arbitrate or interfere in the academic discourse."

Boaler in Her Own Words

Here is a YouTube video of Boaler discussing and demonstrating her ideas about math education with a group of high school students in Britain.

Continued in article

 


Do Accountics Scientists Ever Cheat?

"Former Harvard Psychologist Fabricated and Falsified, Report Says," by Tom Bartlett, Chronicle of Higher Education, September 5, 2012 ---
http://chronicle.com/blogs/percolator/report-says-former-harvard-psychologist-fabricated-falsified/30748

Marc Hauser was once among the big, impressive names in psychology, head of the Cognitive Evolution Laboratory at Harvard University, author of popular books like Moral Minds. That reputation unraveled when a university investigation found him responsible for eight counts of scientific misconduct, which led to his resignation last year.

Now the federal Office of Research Integrity has released its report on Hauser’s actions, determining that he fabricated and falsified results from experiments. Here is a sampling:

Hauser “neither admits nor denies” any research misconduct but, according to the report, accepts the findings. He has agreed to three years of extra scrutiny of any federally supported research he conducts, though the requirement may be moot considering that Hauser is no longer employed by a university. Hauser says in a written statement that he is currently “focusing on at-risk youth”; his LinkedIn profile lists him as a co-founder of Gamience, an e-learning company.

In the statement, Hauser calls the five years of investigation into his research “a long and painful period.” He also acknowledges making mistakes, but seems to blame his actions on being stretched too thin. “I tried to do too much, teaching courses, running a large lab of students, sitting on several editorial boards, directing the Mind, Brain & Behavior Program at Harvard, conducting multiple research collaborations, and writing for the general public,” he writes.

He also implies that some of the blame may actually belong to others in his lab. Writes Hauser: “I let important details get away from my control, and as head of the lab, I take responsibility for all errors made within the lab, whether or not I was directly involved.”

But that take—the idea that the problems were caused mainly by Hauser’s inattention—doesn’t square with the story told by those in his laboratory. A former research assistant, who was among those who blew the whistle on Hauser, writes in an e-mail that while the report “does a pretty good job of summing up what is known,” it nevertheless “leaves off how hard his co-authors, who were his at-will employees and graduate students, had to fight to get him to agree not to publish the tainted data.”

The former research assistant points out that the report takes into account only the research that was flagged by whistle-blowers. “He betrayed the trust of everyone that worked with him, and especially those of us who were under him and who should have been able to trust him,” the research assistant writes.

As detailed in this Chronicle article, several members of his laboratory double-checked Hauser’s coding of an experiment and concluded he was falsifying the results so that those results would support the hypothesis, turning a failed experiment into a success. In 2007 they brought that and other evidence to Harvard officials, who began an investigation, raiding Hauser’s lab and seizing computers.

Gerry Altmann believes the report is significant because it finds that Hauser falsified data—that is, investigators found that Hauser didn’t just make up findings, but actually changed findings to suit his purposes. Altmann is the editor of a journal, Cognition, that published a 2002 paper by Hauser that has since been retracted. When you falsify data, Altmann writes in an e-mail, “you are deliberately reporting as true something that you know is not.”

Continued in article

Jensen Comment
To my knowledge, cheating by accountics scientists has never once been reported to the public. Perhaps this is partly due to the lack of replication and partly because many findings are not important enough to merit whistle-blowing ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

Bob Jensen's threads on professors who cheat ---
http://www.trinity.edu/rjensen/Plagiarism.htm#ProfessorsWhoPlagiarize

Bob Jensen's Fraud Updates ---
http://www.trinity.edu/rjensen/FraudUpdates.htm


"How Non-Scientific Granulation Can Improve Scientific Accountics"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf
By Bob Jensen
This essay takes off from the following quotation:

A recent accountics science study suggests that audit firm scandal with respect to someone else's audit may be a reason for changing auditors.
"Audit Quality and Auditor Reputation: Evidence from Japan," by Douglas J. Skinner and Suraj Srinivasan, The Accounting Review, September 2012, Vol. 87, No. 5, pp. 1737-1765.

Our conclusions are subject to two caveats. First, we find that clients switched away from ChuoAoyama in large numbers in Spring 2006, just after Japanese regulators announced the two-month suspension and PwC formed Aarata. While we interpret these events as being a clear and undeniable signal of audit-quality problems at ChuoAoyama, we cannot know for sure what drove these switches (emphasis added). It is possible that the suspension caused firms to switch auditors for reasons unrelated to audit quality. Second, our analysis presumes that audit quality is important to Japanese companies. While we believe this to be the case, especially over the past two decades as Japanese capital markets have evolved to be more like their Western counterparts, it is possible that audit quality is, in general, less important in Japan (emphasis added) .

 

 


Major problems in accountics science:

Problem 1 --- Control Over Research Methods Allowed in Doctoral Programs and Leading Academic Accounting Research Journals
Accountics scientists control the leading accounting research journals and only allow archival (data mining), experimental, and analytical research methods into those journals. Their referees shun other methods like case method research, field studies, accounting history studies, commentaries, and criticisms of accountics science.
This is the major theme of Anthony Hopwood, Paul Williams, Bob Sterling, Bob Kaplan, Steve Zeff, Dan Stone, and others ---
http://www.trinity.edu/rjensen/TheoryTAR.htm#Appendix01

Since there are so many other accounting research journals in academe and in the practitioner profession, why single out TAR and the other "top" journals for refusing to publish any articles without equations and/or statistical inference tables? Accounting researchers have hundreds of other outlets for publishing their research.

I'm critical of TAR referees because they're symbolic of today's many problems with the way the accountics scientists have taken over the research arm of accounting higher education. Over the past five decades they've taken over all AACSB doctoral programs with a philosophy that "it's our way or the highway" for students seeking PhD or DBA degrees ---
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms

In the United States, following the Gordon/Howell and Pierson reports in the 1950s, our accounting doctoral programs and leading academic journals bet the farm on the social sciences without taking the due cautions of realizing why the social sciences are called "soft sciences." They're soft because "not everything that can be counted, counts. And not everything that counts can be counted."

Be Careful What You Wish For
Academic accountants wanted to become more respectable on their campuses by creating accountics scientists in literally all North American accounting doctoral programs. Accountics scientists were virtually all that our PhD and DBA programs graduated over the ensuing decades, and they took on an elitist attitude that it really did not matter if their research was ignored by practitioners and by those professors who merely taught accounting.

One of my complaints with accountics scientists is that they appear to be unconcerned that they are not real scientists. In real science the primary concern is validity, especially validation by replication. In accountics science validation and replication are seldom of concern. Real scientists react to their critics. Accountics scientists ignore theirs.

Another complaint is that accountics scientists only take on research that they can model. They ignore the many problems, particularly problems faced by the accountancy profession, that they cannot attack with equations and statistical inference.

And more importantly, accountics scientists rarely leave the campus to collect data.

The bottom line is that accountics research rarely has findings of great interest to either practicing professionals or accounting teachers.
The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives ---
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm
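McCloskey and Ziliak's central point is that statistical significance gets confused with practical importance. A minimal sketch (the numbers below are invented for illustration, not drawn from any actual accountics study) shows how a trivially small effect becomes highly "significant" once the sample is large enough, which is exactly the situation archival researchers face with giant databases like CRSP and Compustat:

```python
import math

def z_test_one_sample(sample_mean, pop_mean, sd, n):
    """Two-sided z-test p-value using the normal approximation."""
    z = (sample_mean - pop_mean) / (sd / math.sqrt(n))
    # Two-sided p-value from the standard normal CDF (via the error function)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical numbers: a 0.1-unit (0.1%) difference in means,
# measured over a huge archival sample of 100,000 observations
z, p = z_test_one_sample(sample_mean=100.1, pop_mean=100.0, sd=5.0, n=100_000)
print(f"z = {z:.2f}, p-value = {p:.2e}")  # highly "significant" despite a trivial effect
```

The p-value is minuscule, so the result clears any conventional significance threshold, yet a 0.1% difference may be of no economic consequence whatsoever. That gap between the statistical and the substantive is what the book's title refers to.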

"Research on Accounting Should Learn From the Past" by Michael H. Granof and Stephen A. Zeff, Chronicle of Higher Education, March 21, 2008

The unintended consequence has been that interesting and researchable questions in accounting are essentially being ignored. By confining the major thrust in research to phenomena that can be mathematically modeled or derived from electronic databases, academic accountants have failed to advance the profession in ways that are expected of them and of which they are capable.

Academic research has unquestionably broadened the views of standards setters as to the role of accounting information and how it affects the decisions of individual investors as well as the capital markets. Nevertheless, it has had scant influence on the standards themselves.

Continued in article

 

Problem 2 --- Paranoia Regarding Validity Testing and Commentaries on their Research
This is the major theme of Bob Jensen, Paul Williams, Joni Young and others
574 Shields Against Validity Challenges in Plato's Cave ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

 

Problem 3 --- Lack of Concern over Being Ignored by Accountancy Teachers and Practitioners
Accountics scientists only communicate through their research journals, which are virtually ignored by most accountancy teachers and practitioners. Thus they are mostly gaming in Plato's Cave and having little impact on the outside world, a major criticism raised by then-AAA President Judy Rayburn, Roger Hermanson, and others ---
http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm
Also see
http://www.trinity.edu/rjensen/theory01.htm#WhatWentWrong

Some accountics scientists have even warned against doing research for the practicing profession as a "vocational virus."

Joel Demski steers us away from the clinical side of the accountancy profession by saying we should avoid that pesky “vocational virus.” (See below).

The (Random House) dictionary defines "academic" as "pertaining to areas of study that are not primarily vocational or applied , as the humanities or pure mathematics." Clearly, the short answer to the question is no, accounting is not an academic discipline.
Joel Demski, "Is Accounting an Academic Discipline?" Accounting Horizons, June 2007, pp. 153-157

 

Statistically there are a few youngsters who came to academia for the joy of learning, who are yet relatively untainted by the vocational virus. I urge you to nurture your taste for learning, to follow your joy. That is the path of scholarship, and it is the only one with any possibility of turning us back toward the academy.
Joel Demski, "Is Accounting an Academic Discipline? American Accounting Association Plenary Session" August 9, 2006 ---
http://www.trinity.edu/rjensen//theory/00overview/theory01.htm

Too many accountancy doctoral programs have immunized themselves against the “vocational virus.” The problem lies not in requiring doctoral degrees in our leading colleges and universities. The problem is that we’ve been neglecting the clinical needs of our profession. Perhaps the real underlying reason is that our clinical problems are so immense that academic accountants quake in fear of having to make contributions to the clinical side of accountancy as opposed to the clinical side of finance, economics, and psychology.

Bridging the Gap Between Academic Accounting Research and Audit Practice
"Highlights of audit research: Studies examine auditors' industry specialization, auditor-client negotiations, and executive confidence regarding earnings management," by Cynthia E. Bolt-Lee and D. Scott Showalter, Journal of Accountancy, August 2012 ---
http://www.journalofaccountancy.com/Issues/2012/Jul/20125104.htm

Added Jensen Comment
This is a nice service of the AICPA in attempting to find accountics science articles most relevant to the practitioner world and to translate (in summary form) these articles for a practitioner readership.

Sadly, the service does not stress that research is of only limited relevance until it is validated in some way: at a minimum by encouraging critical commentaries, and at a maximum by multiple independent replications that meet scientific standards ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

September 7, 2012 message from David Johnstone

Dear all, it always seemed to me that statistics in medicine had a level of earnestness and expert input that you would expect in a field where results cost so much to produce and often hugely matter, both in human welfare and potential income/litigation. Many professional statisticians work in medicine and biology generally, and the journal Biometrika is of an extremely high standard. There are many cases of applied medical statisticians publishing major pure theory papers in stats theory journals, and also textbooks that become standard references in statistics departments. In econometrics there are a few such people (e.g. Zellner). Some techniques apply really well to an applied field and get developed there rather than in their "home" field. I think discrete choice models were very largely developed in econometrics (and their software was written there too).

As a strategy for empirical researchers in accounting, it seems to me that enlisting help from pure statisticians is a clever way to do new or better work. If you glance at medical journals you often see joint papers written by a medico and a statistician from different departments and buildings on the campus. R.A. Fisher developed much of modern statistical theory because he was an agricultural scientist who needed to design and interpret experiments. Gosset of "Student's t-test" was a brewery researcher, who wanted valid interpretations of his sample observations.

There are suggestions these days that drug companies have got influence over some medical research programs, but the basic laws of nature, and the fact that a really bad drug will tend to be found out in a "natural experiment" once it's on the market, must be in medical researchers' minds constantly. Publication in these circumstances is only the start of the story.

September 7,  2012 reply from Bob Jensen

Hi David,

I agree fully with everything you said, although outsourcing accountics science research to non-accounting quants is not likely to happen since there are virtually no research grants from government or industry for accountics science research.

And we must face up to the fact that statistical research in medicine (e.g., in epidemiology and drug testing) is only part of all of medical research. In addition there is a tremendous proportion of implementation research in medicine intended to improve diagnosis (e.g., artificial intelligence and virus discovery in biology and genetics) and treatment (e.g., new surgical techniques and prostheses).

What is lacking in accountics science are the components of diagnosis and treatment of benefit to practicing accountants. This of course was the main point of Harvard's Bob Kaplan in his fabulous 2010 AAA plenary session presentation when he implied that accountics scientists only focused on narrow research akin to epidemiology research in medicine.

In any case, until accountics scientists have access to serious research grant money (including contributions to university overhead), I don't think there will be much accountics scientist research outsourcing to statistical experts.

It is also interesting how anthropology took much of the statistical research out of the academic side of their discipline.

Anthropology Without Science: A new long-range plan for the American Anthropological Association that omits the word “science” from the organization's vision for its future has exposed fissures in the discipline ---
http://www.trinity.edu/rjensen/HigherEdControversies.htm#AntropologyNonScience

I'm not proposing that academic accountants go to the extremes of having accounting research without science. What I am proposing is that we have some alternate tracks in accountancy doctoral programs and leading accounting research journals. This is also what the Pathways Commission is seeking.


Respectfully,
Bob Jensen


June 30, 2012
Hi again Steve and David,


I think most of the problem of the relevance of academic accounting research to the accounting profession commenced with the development of giant commercial databases like CRSP, Compustat, and AuditAnalytics. To a certain extent it hurt sociology research to have giant government databases like the census databases. This gave rise to accountics researchers and sociometrics researchers who commenced to treat their campuses like historic castles with moats. The researchers no longer mingled with the outside world due, to a great extent, to a reduced need to collect their own data from the riff raff.



The focus of our best researchers turned toward increasing creativity of mathematical and statistical models and reduced creativity in collecting data. If data for certain variables cannot be found in a commercial database then our accounting professors and doctoral students merely assume away the importance of those variables --- retreating more and more into Plato's Cave.


I think the difference between accountics and sociometrics researchers, however, is that sociometrics researchers often did not get as far removed from database building as accountics researchers. They are more inclined toward field research. One of my close sociometric scientist friends is Mike Kearl. The reason his Website is one of the most popular Websites in Sociology is Mike's dogged effort to make privately collected databases available to other researchers ---

Mike Kearl's great social theory site
Go to http://www.trinity.edu/rjensen/theory02.htm#Kearl


I cannot find a single accountics researcher counterpart to Mike Kearl.


Meanwhile in accounting research, the gap between accountics researchers in their campus castles and the practicing profession became separated by widening moats.


 

In the first 50 years of the American Accounting Association over half the membership was made up of practitioners, and practitioners took part in committee projects, submitted articles to TAR, and in various instances were genuine scholarly leaders in the AAA. All this changed when accountics researchers evolved who had less and less interest in close interactions with the practitioner world.


 

“An Analysis of the Evolution of Research Contributions by The Accounting Review: 1926-2005,” (with Jean Heck), Accounting Historians Journal, Volume 34, No. 2, December 2007, pp. 109-142.

. . .

Practitioner membership in the AAA faded along with their interest in journals published by the AAA [Bricker and Previts, 1990]. The exodus of practitioners became even more pronounced in the 1990s when leadership in the large accounting firms was changing toward professional managers overseeing global operations. Rayburn [2006, p. 4] notes that practitioner membership is now less than 10 percent of AAA members, and many practitioner members join more for public relations and student recruitment reasons rather than interest in AAA research. Practitioner authorship in TAR plunged to nearly zero over recent decades, as reflected in Figure 2.

 

I think that much good could come from providing serious incentives to accountics researchers to row across the mile-wide moats. Accountics leaders could do much to help. For example, they could commence to communicate in English on the AAA Commons ---
How Accountics Scientists Should Change: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm 

 

Secondly, I think TAR editors and associate editors could do a great deal by giving priority to publishing more applied research in TAR so that accountics researchers might think more about the practicing profession. For example, incentives might be given to accountics researchers to actually collect their own data on the other side of the moat --- much like sociologists and medical researchers get academic achievement rewards for collecting their own data.


 

Put in another way, it would be terrific if accountics researchers got off their butts and ventured out into the professional world on the other side of their moats.


 

Harvard still has some (older) case researchers like Bob Kaplan who  interact extensively on the other side of the Charles River. But Bob complains that journals like TAR discourage rather than encourage such interactions.

Accounting Scholarship that Advances Professional Knowledge and Practice
Robert S. Kaplan
The Accounting Review, March 2011, Volume 86, Issue 2, 


 

Recent accounting scholarship has used statistical analysis on asset prices, financial reports and disclosures, laboratory experiments, and surveys of practice. The research has studied the interface among accounting information, capital markets, standard setters, and financial analysts and how managers make accounting choices. But as accounting scholars have focused on understanding how markets and users process accounting data, they have distanced themselves from the accounting process itself. Accounting scholarship has failed to address important measurement and valuation issues that have arisen in the past 40 years of practice. This gap is illustrated with missed opportunities in risk measurement and management and the estimation of the fair value of complex financial securities. This commentary encourages accounting scholars to devote more resources to obtaining a fundamental understanding of contemporary and future practice and how analytic tools and contemporary advances in accounting and related disciplines can be deployed to improve the professional practice of accounting. ©2010 AAA

 

It's high time that the leaders of accountics science made monumental efforts to communicate with the teachers of accounting and the practicing profession. I have enormous optimism regarding our fabulous accountics scientist Mary Barth when she becomes President of the AAA.
 

I'm really, really hoping that Mary will commence the bridge building across moats ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

 

 

The American Sociological Association has a journal called the American Sociological Review (ASR) that is to the ASA much of what TAR is to the AAA.


The ASR, like TAR, publishes mostly statistical studies. But there are some differences worth noting. Firstly, ASR authors are more prone to gather their own data off campus rather than deal only with data they can purchase or with behavioral experiments run on students on campus.


Another thing I've noticed is that the ASR papers are more readable and many have no complicated equations. For example, pick any recent TAR paper at random and then compare it with the write up at
http://www.asanet.org/images/journals/docs/pdf/asr/Aug11ASRFeature.pdf 


Then compare the randomly chosen TAR paper with a randomly chosen ASR paper at
http://www.asanet.org/journals/asr/index.cfm#articles 

 


 

 

Problem 4 --- Ignoring Critics: The Accountics Science Wall of Silence
Leading scholars critical of accountics science include Bob Anthony, Charles Christiensen, Anthony Hopwood, Paul Williams, Roger Hermanson, Bob Sterling, Jane Mutchler, Judy Rayburn, Bob Kaplan, Steve Zeff, Joni Young, Dan Stone, Bob Jensen, and many others. The most frustrating thing for these critics is that accountics scientists are so content with being the highest paid faculty on their campuses, and with their monopoly control of accounting PhD programs (limiting the output of graduates), that they literally ignore their critics and rarely, if ever, respond to criticisms.
See http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm  

Problem 5 ---
The Cult of Statistical Significance: How Standard Error Costs Us Jobs, Justice, and Lives, by Stephen T. Ziliak and Deirdre N. McCloskey (Ann Arbor: University of Michigan Press, ISBN-13: 978-0-472-05007-9, 2007)
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm

Page 206
Like scientists today in medical and economic and other sizeless sciences, Pearson mistook a large sample size for the definite, substantive significance---evidence, as Hayek put it, of "wholes." But it was, as Hayek said, "just an illusion." Pearson's columns of sparkling asterisks, though quantitative in appearance and as appealing as the simple truth of the sky, signified nothing.

pp. 250-251
The textbooks are wrong. The teaching is wrong. The seminar you just attended is wrong. The most prestigious journal in your scientific field is wrong.

You are searching, we know, for ways to avoid being wrong. Science, as Jeffreys said, is mainly a series of approximations to discovering the sources of error. Science is a systematic way of reducing wrongs or can be. Perhaps you feel frustrated by the random epistemology of the mainstream and don't know what to do. Perhaps you've been sedated by significance and lulled into silence. Perhaps you sense that the power of a Rothamsted test against a plausible Dublin alternative is statistically speaking low but you feel oppressed by the instrumental variable one should dare not to wield. Perhaps you feel frazzled by what Morris Altman (2004) called the "social psychology rhetoric of fear," the deeply embedded path dependency that keeps the abuse of significance in circulation. You want to come out of it. But perhaps you are cowed by the prestige of Fisherian dogma. Or, worse thought, perhaps you are cynically willing to be corrupted if it will keep a nice job.

 

In Accountics Science R2 = 0.0004 = (-.02)(-.02) Can Be Deemed a Statistically Significant Linear Relationship ---
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm
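The arithmetic behind that headline is easy to verify. A correlation of r = -0.02 yields R² = (-0.02)(-0.02) = 0.0004, meaning the model explains four hundredths of one percent of the variance; yet with a large enough sample, the standard t-test for a correlation still rejects the null hypothesis at any conventional level. A minimal sketch (the sample size n = 100,000 is my own assumption for illustration, roughly the order of many archival accountics studies):

```python
import math

# Hypothetical illustration of the Ziliak-McCloskey point:
# a correlation that is trivially small in substantive terms
# can still be "statistically significant" at a large sample size.
r = -0.02          # sample correlation coefficient
n = 100_000        # assumed sample size (illustrative, not from any study)

r_squared = r * r  # variance explained: 0.0004, i.e., 0.04 of one percent

# Standard t-statistic for testing H0: rho = 0
t = r * math.sqrt((n - 2) / (1 - r_squared))

# Two-sided p-value via the normal approximation (fine for huge df)
p = math.erfc(abs(t) / math.sqrt(2))

print(f"R^2 = {r_squared:.4f}, t = {t:.2f}, p = {p:.2e}")
# t is about -6.3, so p is far below 0.05: the model "explains"
# almost nothing, yet easily clears the significance bar.
```

Dropping n to 400 while keeping r = -0.02 gives t of roughly -0.4 and p of roughly 0.69: the "significance" is produced entirely by the sample size, which is Ziliak and McCloskey's complaint in a nutshell.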

 

Main Purpose
The main purpose of this document is to appeal to accountics scientists to make greater efforts to communicate with accounting teachers, students, and practitioners. In 2008 the American Accounting Association created the AAA Commons, which is becoming more popular with accounting teachers but is literally ignored by accountics scientists.


A Message from the 2013-2014 President of the American Accounting Association
Accounting Education News
Mary Barth, Stanford University
Fall 2013. Vol. 41, Issue 4
Pages 2-3
http://aaahq.org/pubs/AEN/2013/AEN_Fall13_WEB.pdf

I would like to give you an idea of what to expect this year from your AAA. Remember, the AAA does not reinvent itself with each new president. We each serve for three years—as president-elect, president, and past president—to help ensure continuity. Joining the AAA leadership team is like jumping on a moving train. The objective is to move the train forward while enhancing the experience of all on board and getting to a wonderful destination, not making big changes in direction.

As with any organization, there are the usual things we must do to keep things running smoothly, such as publishing our 14 journals and helping our 16 sections and seven regions to thrive. In addition, there are many ongoing activities focused on our common interests, such as preserving our intellectual property, implementing the Pathways Commission’s recommendations, ensuring we have sound publication ethics, helping regions to reinvigorate their meetings, and planning our 2015–2016 Centennial celebration. We will continue these activities so that we can reap the benefits identified when they were begun. We will add some new activities to reinforce the prior ones and enable us to meet whatever challenges our future holds.

Big—potentially disruptive—changes are looming for accounting education, accounting scholarship, and our role in the world’s economy. The plenary and follow-up sessions at the 2013 Annual Meeting in Anaheim focused on these changes in higher education, research, and teaching and learning and clearly identified the challenges we face. Many of us feel threatened by these disruptive changes. But the changes are exhilarating and have great promise for making our jobs more efficient, more meaningful, and more fun. They will enable each of us to focus on our high value–added activities and to spend less time reinventing the tasks many of us do. With you, we will work to turn these challenges into opportunities.

These challenges command the time and attention of our leadership group. This past year, the AAA Council, Board of Directors, and many of our other colleagues spent considerable time on sharpening the vision in our strategic plan to determine what we can and should do to meet these challenges. The Sharpening Our Vision discussions generated many creative ideas. This year, we will determine which ideas we should implement and how soon we should implement them. As your president, I will help ensure the AAA helps you navigate the many changes on the horizon—regardless of what the future brings.

When I look closer at the changes on the horizon, I see globalization lurking in the background. There is no denying we live in a global world. Globalization is part of today’s reality. The Internet and other technologies, as well as routine long-distance travel, are constantly shrinking the globe.

Continued in article

Jensen Comment
In her first message to the AAA membership, President Barth broke from the precedent of Presidents posting this message to the AAA Commons --- a precedent all former presidents had followed since the inauguration of the AAA Commons.

She mentions many of the AAA's resources for its membership, including its 14 journals. But she makes no mention of either the AAA Commons or the AECM. She devotes most of her message to globalization initiatives of the AAA without once mentioning the role the AAA Commons and the AECM are already playing in the globalization of accounting education.

I think I understand President Barth's low (zero) priority for the AAA Commons. She's one of our top accountics scientists. Like virtually all accountics scientists, she has made no attempt to communicate with accounting educators and practitioners around the globe via the AAA Commons. To date she, like almost every other accountics scientist, has made zero Posts and zero Comments on the Commons ---
See Below!

She ignored my request, without even a courtesy reply, to support my initiative to form a Tech Corner on the Commons where AAA journal editors could ask accountics science authors to send a message to the Commons explaining why their forthcoming articles are relevant to the accounting teachers and practitioners who follow the Commons.

Alas, I fear promotion of the AAA Commons will have to wait for a non-accountics scientist to champion it among the AAA membership. Fortunately, the next President of the AAA will probably be that leader. Christine Botosan, as of November 14, 2013, has made 88 Posts and 15 Comments on the AAA Commons. I suspect we will have to wait until she becomes President in August 2014 for the leadership of the AAA to once again promote the AAA Commons for networking of education, research, and practice messages.

I don't think there's any way to motivate accountics scientists to communicate with the "common" academics on the AAA Commons.

 

 

For the remaining years of my life I intend to do all I can to motivate accountics scientists, accounting teachers, students, and practitioners to communicate with each other more and more and more on the AAA Commons.

The American Accounting Association (AAA) Commons was formed in 2008 as "The Gathering Place for Accounting" ---
http://commons.aaahq.org/pages/home

Since AAACommons was launched at CTLA 2008, the site has evolved to include teaching materials, home pages for sections and regions, research links, conversations from professors, and input from partner firms. This session will introduce new users to the AAACommons, demonstrate how to share teaching materials, and obtain feedback so that we can evolve into the online community you need. We are interested in learning what your institution values as evidence of your contributions. We need your feedback so that we can continue to refine the teaching areas within AAACommons and evolve into the gathering place for accounting that you will use. Please join us to see how AAACommons can be used to share ideas about teaching, research, and service.

What Julie did not mention is what I call the lack of interest of accountics scientists in having their research discussed on the Commons!

Hi Jagdish,

I don't think there's much of a "market test" for accountics research once it's published in TAR, JAR, JAE, CAR, or BAR.


The test ends with the authors getting a hit in TAR, JAR, JAE, CAR, or BAR. There are no subsequent commentaries on the published papers. Only rarely is there replication years later when researchers elect to extend the research in some new ways.


Consider the AECM. Has anybody other than me posted notice of an interesting TAR, JAR, or JAE accountics science paper on the AECM? Steve Kachelmeier mentions some references now and then but only after I've ignited dynamite under his behind.


Sadly, accountics researchers themselves don't comment much on each other's work on the AAA Commons. This often makes me wonder if they're even interested in having public discussions of their research. For example, as of April 16, 2012, the following leading accountics scientists had zero Posts and zero Comments on the AAA Commons (some are even former AAA presidents):

Zero Posts and Zero Comments on the AAA Commons
Mary Barth
Katherine Schipper
Bill Beaver
Joel Demski
Ray Ball
Ross Watts
Jerry Zimmerman
Mark Nelson
Ed Swanson
Tom Omer
Dan Collins
Stephen Penman
Dan Dhaliwal
Jim Ohlson
John Core
Richard A Lambert
Paul Newman
Larry Brown
John Hand
Wayne Landsman
Robert N. Freeman
Maureen McNichols
And the list goes on among our leading accountics scientists

 

Steve Kachelmeier had one Post and zero Comments on the AAA Commons
Bill Kinney had two Posts and zero Comments
Paul Healy had two Posts and zero Comments
Jere Francis had one Post and zero Comments
Bob Libby had zero Posts and three Comments
Mark Bradshaw had zero Posts and one Comment
Linda Bamber had one Post and zero Comments
Amy Dunbar had one Post and two Comments
Greg Waymire had six Posts and seven Comments
Richard Sansing had 26 Posts and 18 Comments
Dan Stone (who posts his childhood picture) had 14 Posts and 9 Comments

 

At the dawn of 2013 Bob Jensen has nearly 1,500 Posts and 15,000 Comments on the AAA Commons.
He's really trying to encourage commentaries on accounting research, accounting practice, and accounting teaching on the AAA Commons.
Most of the AAA's leading accountics scientists show no interest in placing commentaries on accountics research on the AAA Commons or the AECM. Bob Jensen most certainly does not want to make the AAA Commons his blog.

I certainly wish more of the above people became active commenting (especially critically) about accountics science on the AECM and the AAA Commons so I could finally get a break.

Respectfully,
Bob Jensen


The AAA Commons, David Boynton, and Proposal for the AAA Leadership to Form a Tech Corner Forum on the AAA Commons

I am forwarding this AECM message to the current AAA Leadership, including Karen Pincus, Mary Barth, and Julie Smith David. For a very long time, the AAA has not been a good old boys club.

The contributions of accountics scientists to the AAA Commons to date have been almost nothing ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

Bob Jensen has contributed around 100 accountics science postings, but these are only a small proportion of his 1,500 Posts and 15,000 Comments on the Commons ---
http://commons.aaahq.org/people/12462cc690/profile

 

David Boynton
There are quite a few accountics science postings on the AAA Commons thanks to David Boynton. David is on the staff of the AAA. As of January 19, 2013 David has made 470 posts to the Commons. Nearly all of them are accountics science postings.

To see a listing of David's postings on the AAA Commons, do the following:

  1. Go to the AAA Commons at http://commons.aaahq.org/pages/home
     
  2. Sign in as an AAA Member.
    I truly wish the full Commons was available to non-members, but if wishes were horses beggars would ride.
     
  3. On the right side you will see a picture link to David Boynton. Click on this link.
     
  4. Near the top of David's profile you will see a link to his Posts. Click there to see a listing of his postings to the Commons.

 

Proposal for a Quant Corner
I propose that the current leadership of the AAA post a Quant Corner Forum on the Commons. The purpose would be to have accountics scientists post discussions of their existing working papers (e.g., on SSRN) and forthcoming papers in TAR, JAR, JAE, and other quant journals. A restriction would be that these authors discuss their research, without equations and statistical inference tables, at a level that non-quants can understand.

Commons users could then comment on selected Quant Corner postings. Ideally the authors would then reply back in a dialog that is not being accomplished in the accountics science journals themselves. For example, TAR has not published commentaries in years.

The model for the Quant Corner Forum could be the FASB's FASRI blog --- http://www.fasri.net/ 
Note in particular how the accountics scientists discuss their research in plain English beyond a mere abstract.

The problem with the FASRI blog is that it's limited to research related to accounting standard setting.

I envision the Quant Corner to expand to all research topics of accountics scientists.

Below is a quotation from one of my January 18 messages in another thread on the AECM:

Hi Richard (Sansing),


Perhaps the secret lies in the race between the Tortoise and the Hare.


Accountics scientists don't have to become like Bob Jensen, with thousands of postings and tens of thousands of comments on the Commons. But they could become steady in terms of posts and comments, much like you are (gratefully to me) a steady commenter on the AECM.


It would be terrific if Mary Barth posted an Accountics Science Forum (much like Zane's Writing Forum) on the AAA Commons. Then authors could post notices of their forthcoming TAR, JAR, JAE, and other publications as well as postings to SSRN. This might encourage AAA members to then comment on these forthcoming publications. In a way this would offset the lack of published commentaries in TAR, JAR, and JAE.


I'm certain that it will have a different name than Accountics Science Forum. But it could be called something like Quant Corner. The FASB has a blog to serve as a model, but accountics science postings to that blog are much too infrequent ---
http://www.fasri.net/


In other words, the Quant Corner on the Commons could be modeled after the FASRI blog, but the accountics science journal editors and referees should remind authors to make postings to the Quant Corner.


Thank you Richard for being tolerant of my rantings on the AECM.


Respectfully,
Bob Jensen

On January 18, 2013 Richard Campbell replies as follows:

Why not have the AAA have an online comment section for each of the AAA journals?

The Wall Street Journal has that for all their Blogs.

January 19, 2013 reply from Bob Jensen

Thanks for replying, Richard. I've actually thought about that, but I prefer the FASRI-style lead-ins, where authors provide more of a personal chat about their forthcoming research articles. These chats are more than the abstracts that now appear on articles. And the dialog should avoid equations and statistical inference tables.

There could be a suggested outline for author lead-in chats. I like the format that's extremely common on Wikipedia, where there are sections like you see at http://en.wikipedia.org/wiki/Balanced_scorecard

Yes, we could even request that authors fill in a Criticisms section for their own research articles.

What's interesting is that readers like me would be drawn to the Quant Corner Forum in large measure just to see how accountics scientists criticize their own research.

Respectfully,
Bob Jensen

 


Authors of The Accounting Review

To further highlight my point, I examined the contributions of recent The Accounting Review (TAR) authors to the AAA Commons.

Author's History of Posts and Comments to the AAA Commons Since the Commons Began in 2008
Titles and Authors in the March 2012 Edition of The Accounting Review
(Posts and Comments are counted per author, in the order the authors are listed)

Customer-Base Concentration: Implications for Firm Performance and Capital Markets
Panos N. Patatoukas (Posts: 0; Comments: 0)

Gray Markets and Multinational Transfer Pricing
Romana L. Autrey, Francesco Bova (Posts: 1, 0; Comments: 1, 0)

Asset Securitizations and Credit Risk
Mary E. Barth, Gaizka Ormazabal, Daniel J. Taylor (Posts: 0, 0, 0; Comments: 0, 0, 0)

Direct and Mediated Associations among Earnings Quality, Information Asymmetry, and the Cost of Equity
Nilabhra Bhattacharya, Frank Ecker, Per M. Olsson, Katherine Schipper (Posts: 0, 0, 0, 0; Comments: 0, 0, 0, 0)

Assessing the Impact of Alternative Fair Value Measures on the Efficiency of Project Selection and Continuation
Judson Caskey, John S. Hughes (Posts: 2, 0; Comments: 0, 0)

Using Online Video to Announce a Restatement: Influences on Investment Decisions and the Mediating Role of Trust
W. Brooke Elliott, Frank D. Hodge, Lisa M. Sedor (Posts: 1, 0, 0; Comments: 1, 0, 0)

The Role of Stock Liquidity in Executive Compensation
Sudarshan Jayaraman, Todd T. Milbourn (Posts: 0, 0; Comments: 0, 0)

Can Reporting Norms Create a Safe Harbor? Jury Verdicts against Auditors under Precise and Imprecise Accounting Standards
Kathryn Kadous, Molly Mercer (Posts: 0, 0; Comments: 1, 0)

Selection Models in Accounting Research
Clive S. Lennox, Jere R. Francis, Zitian Wang (Posts: 0, 1, 0; Comments: 0, 0, 0)

In Search of Informed Discretion: An Experimental Investigation of Fairness and Trust Reciprocity
Victor S. Maas, Marcel van Rinsum, Kristy L. Towry (Posts: 0, 0, 2; Comments: 0, 0, 2)

The Impact of Religion on Financial Reporting Irregularities
Sean T. McGuire, Thomas C. Omer, Nathan Y. Sharp (Posts: 0, 0, 0; Comments: 4, 0, 0)

Evidence on the Trade-Off between Real Activities Manipulation and Accrual-Based Earnings Management
Amy Y. Zang (Posts: 0; Comments: 0)


Author's History of Posts and Comments to the AAA Commons Since the Commons Began in 2008
Titles and Authors in the January 2012 Edition of The Accounting Review
(Posts and Comments are counted per author, in the order the authors are listed)

Incentives to Inflate Reported Cash from Operations Using Classification and Timing
Lian Fen Lee (Posts: 0; Comments: 0)

Investor Competition over Information and the Pricing of Information Asymmetry
Brian K. Akins, Jeffrey Ng, Rodrigo S. Verdi (Posts: 0, 0; Comments: 0, 0)

A Convenient Scapegoat: Fair Value Accounting by Commercial Banks during the Financial Crisis
Brad A. Badertscher, Jeffrey J. Burks, Peter D. Easton (Posts: 0, 1, 0; Comments: 0, 0, 0)

Tax Avoidance, Large Positive Temporary Book-Tax Differences, and Earnings Persistence
Bradley Blaylock, Terry Shevlin, Ryan J. Wilson (Posts: 0, 0, 1; Comments: 0, 0, 2)

A Fundamental-Analysis-Based Test for Speculative Prices
Asher Curtis (Posts: 0; Comments: 4)

Shareholder Voting on Auditor Selection, Audit Fees, and Audit Quality
Mai Dao, K. Raghunandan, Dasaratha V. Rama (Posts: 0, 0, 0; Comments: 0, 0, 0)

Accounting for Lease Renewal Options: The Informational Effects of Unit of Account Choices
Jeffrey W. Hales, Shankar Venkataraman, T. Jeffrey Wilks (Posts: 2, 0, 0; Comments: 1, 0, 0)

Does Enhanced Disclosure Really Reduce Agency Costs? Evidence from the Diversion of Corporate Resources
Pinghsun Huang, Yan Zhang (Posts: 0, 0; Comments: 0, 0)

Compensation Committees' Treatment of Earnings Components in CEOs' Terminal Years
Mark R. Huson, Yao Tian, Christine I. Wiedman, Heather A. Wier (Posts: 0, 0, 0, 0; Comments: 0, 0, 0, 0)

Accounting Decentralization and Performance Evaluation of Business Unit Managers
Raffi J. Indjejikian, Michal Matĕjka (Posts: 0, 0; Comments: 0, 0)

Evaluating the Strength of Evidence: How Experience Affects the Use of Analogical Reasoning and Configural Information Processing in Tax
Anne M. Magro, Sarah E. Nutter (Posts: 0*, 0; Comments: 0*, 0)

Disclosures of Insider Purchases and the Valuation Implications of Past Earnings Signals
David Veenman (Posts: 0; Comments: 0)

*Anne Magro's Profile page lists 0 Posts and 0 Comments, whereas another source lists her as having 12 Posts and 4 Comments. I think there is a data error for her; I could not find any of her Posts or Comments.



 

Jensen Comment
This is a work in progress. When I find time I will add more author panels from TAR and other AAA journals.

My priors are, however, that AAA journal authors to date are not making an effort on the AAA Commons to explain their research efforts to accounting teachers, researchers, and practitioners who visit the Commons.

Perhaps a hive could be automatically opened up for each paper published in an AAA journal. Then the author(s) could be signaled whenever somebody makes a comment in the hive. The Commons already notifies me by email whenever somebody makes a comment on one of my postings or to a posting where I've added a comment. Wouldn't it be great if this was extended to research and teaching papers published in AAA journals?

One more mission in what's left of my life will be to try to change this.

574 Shields Against Validity Challenges in Plato's Cave ---
http://www.trinity.edu/rjensen/TheoryTAR.htm




Hi Zane,

I, along with others, have been trying to make TAR and other AAA journals more responsible about publishing commentaries on previously published research papers, including commentaries on successful or failed replication efforts.


TAR is particularly troublesome in this regard. Former TAR Senior Editor Steve Kachelmeier insists that the problem does not lie with TAR editors. Literally every submitted commentary, including short reports of replication efforts, has been rejected by TAR referees for decades.


So I looked into how other research journals met their responsibilities for publishing these commentaries. They do it in a variety of ways, but my preferred model is the Dialogue section of The Academy of Management Journal (AMJ) --- in part because the AMJ has been somewhat successful in engaging practitioner commentaries. I wrote the following:


The Dialogue section of the AMJ invites reader comments challenging the validity of assumptions in theory and, where applicable, the assumptions of an analytics paper. The AMJ takes a slightly different tack for challenging validity in what is called an “Editors’ Forum,” examples of which are listed in the index at
http://journals.aomonline.org/amj/amj_index_2007.pdf
 


 

One index had some academic vs. practice Editors' Forum articles that especially caught my eye as it might be extrapolated to the schism between academic accounting research versus practitioner needs for applied research:

Bartunek, Jean M. Editors’ forum (AMJ turns 50! Looking back and looking ahead)—Academic-practitioner collaboration need not require joint or relevant research: Toward a relational

Cohen, Debra J. Editors’ forum (Research-practice gap in human resource management)—The very separate worlds of academic and practitioner publications in human resource management: Reasons for the divide and concrete solutions for bridging the gap. 50(5): 1013–10

Guest, David E. Editors’ forum (Research-practice gap in human resource management)—Don’t shoot the messenger: A wake-up call for academics. 50(5): 1020–1026.

Hambrick, Donald C. Editors’ forum (AMJ turns 50! Looking back and looking ahead)—The field of management’s devotion to theory: Too much of a good thing? 50(6): 1346–1352.

Latham, Gary P. Editors’ forum (Research-practice gap in human resource management)—A speculative perspective on the transfer of behavioral science findings to the workplace: “The times they are a-changin’.” 50(5): 1027–1032.

Lawler, Edward E, III. Editors’ forum (Research-practice gap in human resource management)—Why HR practices are not evidence-based. 50(5): 1033–1036.

Markides, Costas. Editors’ forum (Research with relevance to practice)—In search of ambidextrous professors. 50(4): 762–768.

McGahan, Anita M. Editors’ forum (Research with relevance to practice)—Academic research that matters to managers: On zebras, dogs, lemmings,

Rousseau, Denise M. Editors’ forum (Research-practice gap in human resource management)—A sticky, leveraging, and scalable strategy for high-quality connections between organizational practice and science. 50(5): 1037–1042.

Rynes, Sara L. Editors’ forum (Research with relevance to practice)—Editor’s foreword—Carrying Sumantra Ghoshal’s torch: Creating more positive, relevant, and ecologically valid research. 50(4): 745–747.

Rynes, Sara L. Editors’ forum (Research-practice gap in human resource management)—Editor’s afterword— Let’s create a tipping point: What academics and practitioners can do, alone and together. 50(5): 1046–1054.

Rynes, Sara L., Tamara L. Giluk, and Kenneth G. Brown. Editors’ forum (Research-practice gap in human resource management)—The very separate worlds of academic and practitioner periodicals in human resource management: Implications

More at http://journals.aomonline.org/amj/amj_index_2007.pdf

Also see the index sites for earlier years --- http://journals.aomonline.org/amj/article_index.htm


My appeal for an AMJ model as a way to meet TAR's responsibilities for reporting replications and commentaries fell on deaf ears on the AECM.


So now I'm working on another tack. The AAA Commons now publishes TAR tables of contents. But the accountics science authors have never made an effort to explain their research on the Commons. And members of the AAA have never taken the initiative to comment on those articles or to report successful or failed replication efforts.


I think the problem is that a spark has to ignite both the TAR authors and the AAA membership to commence dialogs on TAR articles as well as articles published by other AAA journals.


To this extent I have the start of a working paper on these issues at
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm 


My purpose in starting the above very unfinished working paper is twofold.


Firstly, it is to show how the very best of the AAA's accountics scientists up to now just don't give a damn about supporting the AAA Commons. My mission for the rest of my life will be to change this.


Secondly, it is to show that the AAA membership has shown no genuine interest in discussing research published in the AAA journals. My mission for the rest of my life will be to change this. Julie Smith David, bless her heart, is now working at my behest to provide me with data regarding who has been most supportive of the AAA Commons since it was formed in 2008. From this I hope to learn more about what active contributors truly want from their Commons. To date my own efforts have simply been to add honey-soaked tidbits to help attract the public to the AAA Commons. I would most certainly like more active contributors to relieve me of this chore.


My impossible dream is to draw accounting teachers, students, and practitioners into public hives of discussion of AAA journal research.


Maybe I'm just a dreamer. But at least I'm still trying after every other initiative I've attempted to draw accountics researchers onto the Commons has failed. I know we have some accountics scientist lurkers on the AECM, but aside from Steve Kachelmeier they do not submit posts regarding their work in progress or their published works.


Thank you Steve for providing value added in your AECM debates with me and some others like Paul Williams even if that debate did boil over.


Respectfully,
Bob Jensen

On May 2, 2012 former TAR Senior Editor Steve Kachelmeier wrote the following:

Is it your position, Bob, that no "accountics" research is of interest to practitioners? I hope that's an overly extreme characterization, but it seems to be implied in your latest post in this thread. I, for one, see no inconsistency at all between research of the style you call accountics and research of relevance to practice.

Steve

May 3, 2012 reply from Bob Jensen

Hi Steve,

Of course there are instances when accountics science findings impact practice and/or teaching. My most obvious example is when accountics scientist Mary Barth was on the IASB. Undoubtedly issues under deliberation impacted on her research, and on occasion accountics science research findings had a bearing on her and other members of the board with whom she consulted. Mary has more practice experience than most accountics scientists. She was an Andersen partner before entering Stanford's PhD program. She was on the IASB from the beginning until 2009.

On rare occasions accountics scientists conduct behavioral experiments on real-world accountants rather than on-campus students. An example is the Hunton and Gold study reported in TAR in May 2010. Subjects in the experiment came from a large national accounting firm that purportedly continued to use the brainstorming technique under study.

AIS professors have had some impact on practice, most notably Bill McCarthy's seminal REA (Resource-Event-Agent) accounting system.

But what I think we need to do is look at what accountics science critics have been saying for decades and what accountics scientists have been ignoring with a wall of silence.

Even accountics scientists admit that their research methods are limited mostly to things they can model without leaving campus. They do not pretend all things can be modeled. They point out that in a free world other researchers are free to use other methods and publish in a wide array of journals.

But there's one huge problem that evolved with accountics science since the 1960s
In their zeal to bring science into academic accounting research they literally took over all the doctoral programs in North America for the past five decades. Our doctoral graduates developed mathematical and econometric skills --- that's a good thing. But at the same time our doctoral programs for more than five decades have been generating graduates who are not skilled in other methods of research such as history research, case study research, philosophy research, etc.
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms

As Bob Kaplan, Steve Zeff, and many other critics point out, accountics scientists tend to focus only on problems that can be approached with mathematics and statistics. Accounting teachers, practitioners, and even some accountics scientists saw great problems in the avoidance of a much broader set of problems in education and the profession.

In an effort to motivate accounting faculty to conduct research using methods other than accountics science, many new journals were formed. The hope was that, in a publish-or-perish environment, accounting researchers would branch out from accountics science, using other methods to generate findings on a much broader set of problems.

This would have been a good thing except for one thing --- our doctoral program graduates over the past five decades really preferred accountics science. In part this is because accountics scientists rarely have to leave the campus.

Also most of our doctoral program graduates since the 1960s prefer accountics science methods because that's all they learned.

And so we have the following events that happened in academic accounting research:

  1. New journals like Accounting Horizons, Issues in Accounting Education, and a bunch of others were formed to motivate our doctoral program graduates to tackle problems that are intractable in accountics science.

    Mautz, R. K. 1987. Editorial. Accounting Horizons (September): 109-111.

    Mautz, R. K. 1987. Editorial: Expectations: Reasonable or ridiculous? Accounting Horizons (December): 117-120.


     

  2. But instead of paying attention to the missions of these new journals like Accounting Horizons, accountics scientists hijacked those journals, turning them into TAR clones for accountics science publications. Accounting teachers and practitioners ended up ignoring Accounting Horizons as much as they ignored TAR. ---
    http://www.cs.trinity.edu/~rjensen/temp/ZeffCommentOnAccountingHorizons.ppt  

 

"Research on Accounting Should Learn From the Past" by Michael H. Granof and Stephen A. Zeff, Chronicle of Higher Education, March 21, 2008

The unintended consequence has been that interesting and researchable questions in accounting are essentially being ignored. By confining the major thrust in research to phenomena that can be mathematically modeled or derived from electronic databases, academic accountants have failed to advance the profession in ways that are expected of them and of which they are capable.

Academic research has unquestionably broadened the views of standards setters as to the role of accounting information and how it affects the decisions of individual investors as well as the capital markets. Nevertheless, it has had scant influence on the standards themselves.

Continued in article

 

This of course raises the question: if accountics scientists finally listen to their many critics, what should they do to "advance the profession in ways that are expected of them and of which they are capable?"

  1. I don't think anybody has an answer to this question, and I would not put much faith in quickie surveys of accounting firms and corporations.
    The first thing accountics scientists need to do is get off campus and make extensive contacts inside accounting firms and business firms of all sizes. Interactively and over time they would, thereby, identify profession problems that need to be researched that would advance the profession.
     
  2. Accountics scientists have done little to improve the IASB and FASB conceptual frameworks. For example, earnings-per-share (eps) is probably the most important performance index tracked by investors and analysts. To date the conceptual framework for earnings and eps is lousy. I think accountics scientists could bring innovative research to bear on conceptual frameworks for earnings and most other items in the conceptual framework.
     
  3. To date accountics scientists mostly ignore externalities (non-convexities) that are very difficult to bring into their empirical and analytical models. Perhaps if they get beyond their models and purchased databases they can bring other research methods to bear upon these externalities.
     
  4. Most importantly, in my opinion, accountics scientists need to start communicating with teachers and practitioners. For example, their record to date in messaging with accounting teachers and practitioners on the AAA Commons is a big, big, big ZERO!

Accountics Scientists Seeking Truth: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
 

 

Summary of my reply to Steve

The problem is not that accountics scientists own TAR and will not allow any articles to be published that do not contain equations and/or statistical inference tables. There are numerous other outlets publishing a wider variety of accounting research.

One enormous problem is that, in their zeal to bring science into academic accounting research, accountics scientists literally took over all the doctoral programs in North America for the past five decades. Our doctoral graduates developed mathematical and econometric skills --- that's a good thing. But at the same time our doctoral programs for more than five decades have been generating graduates who are not skilled in other methods of research such as history research, case study research, philosophy research, etc.
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms

But an even bigger problem is that accountics science is an easier path to research. Most accountics scientists conduct research in Plato's Cave without having to leave the campus. Over more than five decades accountics scientists have grown soft and avoid the hard problems in research, such as the problem of providing creative and innovative findings for the accountancy profession.

 

How did academic accounting research become a pseudo science?
http://www.trinity.edu/rjensen/theory01.htm#WhatWentWrong

 




 

CONCLUSION from
http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm

In the first 40 years of TAR, an accounting “scholar” was first and foremost an expert on accounting. After 1960, following the Gordon and Howell Report, the perception of what it took to be a “scholar” changed to quantitative modeling. It became advantageous for an “accounting” researcher to have a degree in mathematics, management science, mathematical economics, psychometrics, or econometrics. Being a mere accountant no longer was sufficient credentials to be deemed a scholarly researcher. Many doctoral programs stripped much of the accounting content out of the curriculum and sent students to mathematics and social science departments for courses. Scholarship on accounting standards became too much of a time diversion for faculty who were “leading scholars.” Particularly relevant in this regard is Dennis Beresford’s address to the AAA membership at the 2005 Annual AAA Meetings in San Francisco:

In my eight years in teaching I’ve concluded that way too many of us don’t stay relatively up to date on professional issues. Most of us have some experience as an auditor, corporate accountant, or in some similar type of work. That’s great, but things change quickly these days.
Beresford [2005]

 

Jane Mutchler made a similar appeal for accounting professors to become more involved in the accounting profession when she was President of the AAA [Mutchler, 2004, p. 3].

In the last 40 years, TAR’s publication preferences shifted toward problems amenable to scientific research, with esoteric models requiring accountics skills in place of accounting expertise. When Professor Beresford attempted to publish his remarks, an Accounting Horizons referee’s report to him contained the following revealing reply about “leading scholars” in accounting research:

1. The paper provides specific recommendations for things that accounting academics should be doing to make the accounting profession better. However (unless the author believes that academics' time is a free good) this would presumably take academics' time away from what they are currently doing. While following the author's advice might make the accounting profession better, what is being made worse? In other words, suppose I stop reading current academic research and start reading news about current developments in accounting standards. Who is made better off and who is made worse off by this reallocation of my time? Presumably my students are marginally better off, because I can tell them some new stuff in class about current accounting standards, and this might possibly have some limited benefit on their careers. But haven't I made my colleagues in my department worse off if they depend on me for research advice, and haven't I made my university worse off if its academic reputation suffers because I'm no longer considered a leading scholar? Why does making the accounting profession better take precedence over everything else an academic does with their time?
As quoted in Jensen [2006a]

 

The above quotation illustrates the consequences of editorial policies of TAR and several other leading accounting research journals. To be considered a “leading scholar” in accountancy, one’s research must employ mathematically-based economic/behavioral theory and quantitative modeling. Most TAR articles published in the past two decades support this contention. But according to AAA President Judy Rayburn and other recent AAA presidents, this scientific focus may not be in the best interests of accountancy academicians or the accountancy profession.

In terms of citations, TAR fails on two accounts. Citation rates are low in practitioner journals because the scientific paradigm is too narrow, thereby discouraging researchers from focusing on problems of great interest to practitioners that seemingly just do not fit the scientific paradigm due to lack of quality data, too many missing variables, and suspected non-stationarities. TAR editors are loath to open TAR up to non-scientific methods so that really interesting accounting problems are neglected in TAR. Those non-scientific methods include case method studies, traditional historical method investigations, and normative deductions.

In the other account, TAR citation rates are low in academic journals outside accounting because the methods and techniques being used (like CAPM and options pricing models) were discovered elsewhere and accounting researchers are not sought out for discoveries of scientific methods and models. The intersection of models and topics that do appear in TAR seemingly are borrowed models and uninteresting topics outside the academic discipline of accounting.

We close with a quotation from Scott McLemee demonstrating that what happened among accountancy academics over the past four decades is not unlike what happened in other academic disciplines that developed “internal dynamics of esoteric disciplines,” communicating among themselves in loops detached from their underlying professions. McLemee’s [2006] article stems from Bender [1993].

 “Knowledge and competence increasingly developed out of the internal dynamics of esoteric disciplines rather than within the context of shared perceptions of public needs,” writes Bender. “This is not to say that professionalized disciplines or the modern service professions that imitated them became socially irresponsible. But their contributions to society began to flow from their own self-definitions rather than from a reciprocal engagement with general public discourse.”

 

Now, there is a definite note of sadness in Bender’s narrative – as there always tends to be in accounts of the shift from Gemeinschaft to Gesellschaft. Yet it is also clear that the transformation from civic to disciplinary professionalism was necessary.

 

“The new disciplines offered relatively precise subject matter and procedures,” Bender concedes, “at a time when both were greatly confused. The new professionalism also promised guarantees of competence — certification — in an era when criteria of intellectual authority were vague and professional performance was unreliable.”

But in the epilogue to Intellect and Public Life, Bender suggests that the process eventually went too far. “The risk now is precisely the opposite,” he writes. “Academe is threatened by the twin dangers of fossilization and scholasticism (of three types: tedium, high tech, and radical chic). The agenda for the next decade, at least as I see it, ought to be the opening up of the disciplines, the ventilating of professional communities that have come to share too much and that have become too self-referential.”

For the good of the AAA membership and the profession of accountancy in general, one hopes that the changes in publication and editorial policies at TAR proposed by President Rayburn [2005, p. 4] will result in the “opening up” of topics and research methods produced by “leading scholars.”




Illustration 1

(Taxpayers) "are willing to accept a larger share of the burden required to reform the Social Security system as their concern about the future sustainability of the Social Security worsens." However, "this willingness to accept a larger share of the burden does not begin until participants' concerns reach a very high level...Prior to reaching that very high level of concern, the data...indicate there is no change in (taxpayer) willingness to accept a larger share of the burden."
"The Effect of Accounting Information on Taxpayers' Acceptance of Tax Reform," Journal of the American Taxation Association, published twice a year by the AAA, by James J. Maroney, Cynthia M. Jackson, Timothy J. Rupert, and Yue (May) Zhang, Spring 2012
Access is not free even for AAA members

This study examines the extent to which investor-level taxes affect the pricing and pre-tax returns of securities. Specifically, we investigate whether the pre-tax yield on outstanding conventional preferred stock (CPS) decreased after the 2003 Jobs and Growth Tax Relief Reconciliation Act (JGTRRA) reduced the individual's tax rate on dividends. Our research design for detecting tax effects is strong for two reasons: (1) JGTRRA provides a quasi-experimental setting that permits a pre/post design, and (2) we use trust preferred stock (TPS) issued by the same firm as the tax-disfavored benchmark asset, which permits a matched-pair design that controls for risk. Additional tests including CPS issues without TPS counterparts confirm the effect of JGTRRA on CPS issues. The results indicate that investors reacted to the new tax-favored status of CPS by bidding up its price, which lowered its yield.

"AAA PUBLISHES STUDY ON AMERICANS' WILLINGNESS TO SACRIFICE TO SAVE SOCIAL SECURITY," by Bob Schneider, AccountingEducation.com, May 2012 ---
http://www.accountingeducation.com/index.cfm?page=newsdetails&id=151976

The authors state: "Given the urgency of this problem, our results may provide some guidance to policy makers as they consider possible reforms to the Social Security system and how to communicate the need for these reforms to taxpayers...While this information may heighten taxpayers' concern about the sustainability of the system, it also appears to increase their acceptance of traditionally unpopular reform measures." Accrual-basis information might also be featured, they add, in "the annual benefits statement ('Your Social Security Statement') sent to workers, so as to alert taxpayers about the financial condition of the Social Security fund."

Adds Prof. Rupert: "The crux of our study is that people could very well respond to a clear and forthright presentation of this problem much as they have responded in the past to such national crises as wars or natural disasters."

The new study consists of an experiment involving 159 undergraduate and graduate accounting students, "an important group to examine," in the researchers' words, "because it is likely that family and friends would seek their expertise and guidance to help them understand potential tax reform measures." Subjects were randomly divided into three groups as part of a project, they were told, "to study taxpayers' opinions about the Social Security system and taxpayers' attitudes about potential changes to the Social Security system."

In conclusion, the professors put it this way: "Participants in our study appear to be exhibiting self-sacrificing behavior rather than self-interested behavior as their concern about Social Security's sustainability increases...However, we also find that as the participants' concern...reaches an extremely high level, their willingness to accept a larger share of the burden needed to reform the Social Security system begins to decline. Our finding is also consistent with [the] suggestion that when a crisis is believed to be so overwhelming as to induce feelings of helplessness, it may lead to self-interested behavior. If self-interested behavior does arise, Congress may need to act very soon to reform the Social Security and Medicare systems before younger taxpayers begin to believe the system is beyond repair."

Bob Jensen's threads on entitlements ---
http://www.trinity.edu/rjensen/Entitlements.htm


Developmental Research and the Pathways Commission Initiatives

2012 "Final" Pathways Commission Report ---
http://commons.aaahq.org/files/0b14318188/Pathways_Commission_Final_Report_Complete.pdf
Also see a summary at
"Accounting for Innovation," by Elise Young, Inside Higher Ed, July 31, 2012 ---
http://www.insidehighered.com/news/2012/07/31/updating-accounting-curriculums-expanding-and-diversifying-field

The AAA's Pathways Commission Accounting Education Initiatives Make National News
Accountics Scientists Should Especially Note the First Recommendation

"Accounting for Innovation," by Elise Young, Inside Higher Ed, July 31, 2012 ---
http://www.insidehighered.com/news/2012/07/31/updating-accounting-curriculums-expanding-and-diversifying-field

Accounting programs should promote curricular flexibility to capture a new generation of students who are more technologically savvy, less patient with traditional teaching methods, and more wary of the career opportunities in accounting, according to a report released today by the Pathways Commission, which studies the future of higher education for accounting.

In 2008, the U.S. Treasury Department's  Advisory Committee on the Auditing Profession recommended that the American Accounting Association and the American Institute of Certified Public Accountants form a commission to study the future structure and content of accounting education, and the Pathways Commission was formed to fulfill this recommendation and establish a national higher education strategy for accounting.

In the report, the commission acknowledges that some sporadic changes have been adopted, but it seeks to put in place a structure for much more regular and ambitious changes.

The report includes seven recommendations:

According to the report, its two sponsoring organizations -- the American Accounting Association and the American Institute of Certified Public Accountants -- will support the effort to carry out the report's recommendations, and they are finalizing a strategy for conducting this effort.

Hsihui Chang, a professor and head of Drexel University’s accounting department, said colleges must prepare students for the accounting field by encouraging three qualities: integrity, analytical skills and a global viewpoint.

“You need to look at things in a global scope,” he said. “One thing we’re always thinking about is how can we attract students from diverse groups?” Chang said the department’s faculty comprises members from several different countries, and the university also has four student organizations dedicated to accounting -- including one for Asian students and one for Hispanic students.

He said the university hosts guest speakers and accounting career days to provide information to prospective accounting students about career options: “They find out, ‘Hey, this seems to be quite exciting.’ ”

Jimmy Ye, a professor and chair of the accounting department at Baruch College of the City University of New York, wrote in an email to Inside Higher Ed that his department is already fulfilling some of the report’s recommendations by inviting professionals from accounting firms into classrooms and bringing in research staff from accounting firms to interact with faculty members and Ph.D. students.

Ye also said the AICPA should collect and analyze supply and demand trends in the accounting profession -- but not just in the short term. “Higher education does not just train students for getting their first jobs,” he wrote. “I would like to see some study on the career tracks of college accounting graduates.”

Mohamed Hussein, a professor and head of the accounting department at the University of Connecticut, also offered ways for the commission to expand its recommendations. He said the recommendations can’t be fully put into practice with the current structure of accounting education.

“There are two parts to this: one part is being able to have an innovative curriculum that will include changes in technology, changes in the economics of the firm, including risk, international issues and regulation,” he said. “And the other part is making sure that the students will take advantage of all this innovation.”

The university offers courses on some of these issues as electives, but it can’t fit all of the information in those courses into the major’s required courses, he said.

Continued in article

2012 "Final" Pathways Commission Report ---
http://commons.aaahq.org/files/0b14318188/Pathways_Commission_Final_Report_Complete.pdf

Jensen Comment
This is one of the most important initiatives to emerge from the AAA in recent years.

I would like to be optimistic, but change will be very slow. President Wilson, himself a former professor and university president, once remarked that it's easier to move a cemetery than to change a university.

It is easier to move a cemetery than to affect a change in curriculum.
Woodrow Wilson

President of Princeton University 1902-1910
President of the United States 1913-1921

And in the 21st Century you can imagine the lawsuits that would clog the courts if a town tried to move a cemetery.

Bob Jensen's threads on Higher Education Controversies and Need for Change ---
http://www.trinity.edu/rjensen/HigherEdControversies.htm

The sad state of accountancy doctoral programs ---
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms

How Accountics Scientists Should Change: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm 

July 31, 2012 reply from Paul Williams

Bob, A good place to start is to jettison pretenses of accounting being a science. As Anthony Hopwood noted in his presidential address, accounting is a practice. The tools of science are certainly useful, but using those tools to investigate accounting problems is quite a different matter than claiming that accounting is a science. Teleology doesn't enter the picture in the sciences -- nature is governed by laws, not purposes. Accounting is nothing but a purposeful activity and must (as Jagdish has eloquently noted here and in his Critical Perspectives on Accounting article) deal with values, law and ethics. As Einstein said, "In nature there are no rewards or punishments, only consequences." For a social practice like accounting to pretend there are only consequences (as if economics was a science that deals only with "natural kinds") has been a major failing of the academy in fulfilling its responsibilities to a discipline that also claims to be a profession.

In spite of a "professional economist's" claims made here that economics is a science, there is quite some controversy over that even within the economic community. Ha-Joon Chang, another professional economist at Cambridge U., had this to say about the economics discipline: "Recognizing that the boundaries of the market are ambiguous and cannot be determined in an objective way lets us realize that economics is not a science like physics or chemistry, but a political exercise. Free-market economists may want you to believe that the correct boundaries of the market can be scientifically determined, but this is incorrect. If the boundaries of what you are studying cannot be scientifically determined what you are doing is not a science (23 Things They Don't Tell You About Capitalism, p. 10)."
The silly persistence of professional accountants in asserting that accounting is apolitical and aethical may be a rationalization they require, but for academics to harbor the same beliefs seems to be a decidedly unscientific posture to take. In one of Ed Arrington's articles published some time ago, he argued that accounting's pretenses of being scientific are risible. As he said (as near as I can recall): "Watching the positive accounting show, Einstein's gods must be rolling in the aisles."


August 23, 2012 message from Bob Jensen
Hi Steve,

No, I cannot take up your challenge, because you've defined research relevant to practitioners in such a way that it does not create a subset of "developmental research" set apart from accountics basic and applied research in general. Accountics research is accounting research containing mathematical equations and/or statistical inference tables (and contains the word accounting).
Your definitional criterion for "accounting research relevant to practitioners" reads as follows:

 
"Any accounting research that "helps those who rely on the producers of accounting information to gain a better understanding of the nature and consequences of that reliance."

 

In the eyes of accountics researchers, all of their findings help producers of accounting information "to gain a better understanding," such that it would be impossible to find any accountics research publication that does not meet your definition.


If you want me to take up your challenge, you will have to be more specific, along the lines of accounting research criteria that separate developmental research from basic and applied research.

The Frascati Manual --- http://en.wikipedia.org/wiki/Frascati_Manual
The Frascati Manual outlines three forms of research: basic research, applied research, and experimental development.


I think the "applied research" definition above is too vague for academic accounting research except possibly in the realm of AIS and related research such as in XBRL.

If you allow me to define the accounting research intended to be relevant to practitioners as being "experimental development" (where experimental can be a sample of one, two, or a few) as defined above in actual companies, then I can take up your challenge.


I would rather, however, that you list five accountics research studies that meet the "experimental development" definition above as applied in actual companies.

One non-accountics illustration that at one time was in the "experimental development" stage was the Balanced Scorecard model. Since this model is now used by over half the large corporations in the U.S. (according to Tom Klammer), I think this model has moved beyond the experimental stage. Other examples include the various early experiments with ABC costing in companies, many of which were written up in practitioner journals.

On the accountics side, we might include the various models that are employed in industry to value financial instruments and derivative financial instruments. For example, even though it is often unsuited for option-valuation applications, the Black-Scholes model met the test of actually being experimented with at the company level ---
http://www.cpa-connecticut.com/sfas123r.html
This summary is written by a practitioner.
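For readers unfamiliar with the model being discussed, the standard Black-Scholes valuation of a European call option can be sketched in a few lines of Python. This is only an illustrative sketch of the textbook formula using parameter values I chose for the example; it is not the valuation practice of any particular firm, and the function names are my own:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal cumulative distribution via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes value of a European call option.
    S: spot price, K: strike price, T: years to expiration,
    r: continuously compounded risk-free rate, sigma: annualized volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money call: spot 100, strike 100, one year, 5% rate, 20% volatility
print(round(black_scholes_call(100.0, 100.0, 1.0, 0.05, 0.20), 2))  # prints 10.45
```

The simplicity of this closed-form calculation is part of why the model spread so quickly into company-level practice, even in settings (such as valuing employee stock options under SFAS 123R) where its assumptions are strained.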

I think a better practical model is as follows:
"How to “Excel” at Options Valuation," by Charles P. Baril, Luis Betancourt, and John W. Briggs, Journal of Accountancy, December 2005 --- http://www.aicpa.org/pubs/jofa/dec2005/baril.htm
This is one of the best articles for accounting educators on issues of option valuation!


Conclusion
I await your list of five other accountics research studies that meet the "experimental development" definition above. I know you have one in mind that is an actual application of brainstorming in a real-world accounting firm. How about four others?

Then I will follow up by listing five accounting research studies that do not meet the "experimental development" definition above.

Respectfully,
Bob Jensen

August 23, 2012 message from David Johnstone

 

Dear all, I feel that Steve is right that practitioners in accounting are naturally ambivalent, if not tacitly defensive or even hostile, to research. Accounting practitioners make a lot of money for their firms and clients by producing answers that someone holding the purse strings likes. The Chicago economist George Stigler, who won a Nobel Prize, described economists as having a role as hired guns. Their fee-attracting ability is to reason backwards to assumptions upon which the desired conclusion stands up. Accountants have more of this in their role. If you are an engineer and you want to please the boss by designing a more efficient structure, you really want sophisticated engineering research. But if you are an accountant and you want to produce a certain valuation of a firm or a certain income number, ignorance of restrictive research findings can often be of great assistance. Positivist research that casts the profession in a poor light (e.g. by predicting certain self-interested professional behaviours) will definitely not be welcome reading.

If “true income” existed in a physical sense in the way that relevant variables in say engineering or medicine exist (e.g. true weight or true existence of infection) then there would have to be more interest from the profession into research. But when notions like “income” and “value” are so indefinite and non-existent in anything resembling a physical sense, it is little wonder that the profession enjoys its ability to work freely and often quite mechanically.

The dependence of so many accounting tasks, such as valuation, on forecasts of future events (sales, costs, economy, asset prices, etc.) would seem to make forecasting models and research of real interest to accountants. But again, these might produce results that don’t suit the desired conclusion, and then there is the underlying issue that forecasts of such events are not in general anywhere near the sophistication or accuracy of weather forecasts (i.e., forecasts in a physical rather than a social domain). And then there are some who would say that forecasting is “not accounting” (as if any discipline has defined borders).

 

David

 

Hi David,

So should we just ignore the Pathways Commission initiatives and have more parades for the accountics science researchers who publish articles in TAR, JAR, and JAE? Or should we try to conduct developmental research that might interest the practicing profession in our research? I think accountics researchers are so enjoying their high salaries and status that they see no need to take up the Pathways Commission initiatives. This is sad!

We only have to look at the entire program of plenary sessions at the 2012 AAA Annual Meetings to see that accountics researchers have little interest in developmental research for the profession.  Not one plenary session was devoted to developmental research or the accounting profession in general.

Your conclusion, David, in a way defies the history of the American Accounting Association. For the first 40 or so years of its life there were more practitioner members than accounting professor members in the AAA. Practitioners published frequently in The Accounting Review and had very, very close ties with accounting professors on developmental research issues. Normative research was king of the hill --- “An Analysis of the Evolution of Research Contributions by The Accounting Review: 1926-2005,” by Jean Heck and Robert E. Jensen, Accounting Historians Journal, Volume 34, No. 2, December 2007, pp. 109-142

In the 1960s, accountics scientists took over the AAA and commenced calling normative research bullshit. More importantly, prestigious accountics researchers no longer associated much with practitioners. Accordingly, practitioner membership in the AAA plummeted. The Accounting Review and the emerging JAR and JAE would not publish developmental research. Accountics researchers literally took over all accounting doctoral programs. To have any prestige at all, accounting research publications had to have equations and/or statistical inference tables even when the samples really were not random samples.

At the 2012 AAA Annual Meetings the large accounting firms did have concurrent session presentations on developmental research that they would like to see academic accounting researchers address.

I was told that accountics researchers showed very little interest in those developmental research idea sessions, although I have to admit that I also had conflicts that prevented me from attending. I had to return a day early and did not attend any of the Wednesday sessions. If TAR, JAR, and JAE had sections in their journals for experimental developmental research, I suspect there would've been more interest in those AAA sessions sponsored by the large firms.

I do think that, if accounting researchers commenced communications with the firms about developmental research and if the accounting doctoral programs had developmental research tracks (including Case Method tracks), interest in experimental development research in corporations and accounting firms might pick up.

Accounting firms themselves would probably be cooperative, especially if some of this research were coordinated in some way through the AAA. The AICPA funded the AAA Notable Contributions to the Accounting Literature Award in part to stimulate developmental research but, unfortunately, did not make developmental research a condition for granting the award. Sadly, nearly all the awards then went to accountics researchers who did not do developmental research in companies and CPA firms.

One of the more successful initiatives linking professors with practitioners over the years is the Trueblood Case Seminars funded by Deloitte, where accounting professors meet (usually in Scottsdale) in teams with practitioners to develop cases drawn from accounting practice. These cases, however, were mostly teaching cases that would not qualify under the heading of developmental research.

Another effort was the AICPA's case competitions, where at least one author had to be a professor and one author had to be a practitioner. My partners (Bruce Sidlinger and John Howland) and I were winners of the 1998 AICPA Academic/Practitioner Case Competition. On the first round the case was one of 12 in the nation accepted for publication by the AICPA. It was then selected as one of six to be presented at the AICPA's Accounting Educators Conference in McLean, Virginia, on November 7, 1998.

But TAR, JAR, or JAE most likely would've refused to even send those AICPA Case Competition winners out for review. As a result, our AICPA publication counted much less in my performance reviews at Trinity University (not that I'm complaining about how well Trinity overpaid me during 24 years).

I was a part of the accountics science takeover from the get-go in the 1960s. Every article I ever published in TAR or JAR had equations. Years and years later I was a latecomer in realizing how separated accountics researchers became from the practitioner world and how limited our research findings were for developmental progress in the accounting profession.

Granof and Zeff state the history better than I can.


 

"Research on Accounting Should Learn From the Past"
by Michael H. Granof and Stephen A. Zeff
Chronicle of Higher Education, March 21, 2008

Starting in the 1960s, academic research on accounting became methodologically supercharged — far more quantitative and analytical than in previous decades. The results, however, have been paradoxical. The new paradigms have greatly increased our understanding of how financial information affects the decisions of investors as well as managers. At the same time, those models have crowded out other forms of investigation. The result is that professors of accounting have contributed little to the establishment of new practices and standards, have failed to perform a needed role as a watchdog of the profession, and have created a disconnect between their teaching and their research.

Before the 1960s, accounting research was primarily descriptive. Researchers described existing standards and practices and suggested ways in which they could be improved. Their findings were taken seriously by standard-setting boards, CPA's, and corporate officers.

A confluence of developments in the 1960s markedly changed the nature of research — and, as a consequence, its impact on practice. First, computers emerged as a means of collecting and analyzing vast amounts of information, especially stock prices and data drawn from corporate financial statements. Second, academic accountants themselves recognized the limitations of their methodologies. Argument, they realized, was no substitute for empirical evidence. Third, owing to criticism that their research was decidedly second rate because it was insufficiently analytical, business faculties sought academic respectability by employing the methods of disciplines like econometrics, psychology, statistics, and mathematics.

In response to those developments, professors of accounting not only established new journals that were restricted to metric-based research, but they limited existing academic publications to that type of inquiry. The most influential of the new journals was the Journal of Accounting Research, first published in 1963 and sponsored by the University of Chicago Graduate School of Business.

Acknowledging the primacy of the journals, business-school chairmen and deans increasingly confined the rewards of publication exclusively to those publications' contributors. That policy was applied initially at the business schools at private colleges that had the strongest M.B.A. programs. Then ambitious business schools at public institutions followed the lead of the private schools, even when the public schools had strong undergraduate and master's programs in accounting with successful traditions of practice-oriented research.

The unintended consequence has been that interesting and researchable questions in accounting are essentially being ignored. By confining the major thrust in research to phenomena that can be mathematically modeled or derived from electronic databases, academic accountants have failed to advance the profession in ways that are expected of them and of which they are capable.

Academic research has unquestionably broadened the views of standards setters as to the role of accounting information and how it affects the decisions of individual investors as well as the capital markets. Nevertheless, it has had scant influence on the standards themselves.

The research is hamstrung by restrictive and sometimes artificial assumptions. For example, researchers may construct mathematical models of optimum compensation contracts between an owner and a manager. But contrary to all that we know about human behavior, the models typically posit each of the parties to the arrangement as a "rational" economic being — one devoid of motivations other than to maximize pecuniary returns.

Moreover, research is limited to the homogenized content of electronic databases, which tell us, for example, the prices at which shares were traded but give no insight into the decision processes of either the buyers or the sellers. The research is thus unable to capture the essence of the human behavior that is of interest to accountants and standard setters.

Further, accounting researchers usually look backward rather than forward. They examine the impact of a standard only after it has been issued. And once a rule-making authority issues a standard, that authority seldom modifies it. Accounting is probably the only profession in which academic journals will publish empirical studies only if they have statistical validity. Medical journals, for example, routinely report on promising new procedures that have not yet withstood rigorous statistical scrutiny.

Floyd Norris, the chief financial correspondent of The New York Times, titled a 2006 speech to the American Accounting Association "Where Is the Next Abe Briloff?" Abe Briloff is a rare academic accountant. He has devoted his career to examining the financial statements of publicly traded companies and censuring firms that he believes have engaged in abusive accounting practices. Most of his work has been published in Barron's and in several books — almost none in academic journals. An accounting gadfly in the mold of Ralph Nader, he has criticized existing accounting practices in a way that has not only embarrassed the miscreants but has caused the rule-making authorities to issue new and more-rigorous standards. As Norris correctly suggested in his talk, if the academic community had produced more Abe Briloffs, there would have been fewer corporate accounting meltdowns.

The narrow focus of today's research has also resulted in a disconnect between research and teaching. Because of the difficulty of conducting publishable research in certain areas — such as taxation, managerial accounting, government accounting, and auditing — Ph.D. candidates avoid choosing them as specialties. Thus, even though those areas are central to any degree program in accounting, there is a shortage of faculty members sufficiently knowledgeable to teach them.

To be sure, some accounting research, particularly that pertaining to the efficiency of capital markets, has found its way into both the classroom and textbooks — but mainly in select M.B.A. programs and the textbooks used in those courses. There is little evidence that the research has had more than a marginal influence on what is taught in mainstream accounting courses.

What needs to be done? First, and most significantly, journal editors, department chairs, business-school deans, and promotion-and-tenure committees need to rethink the criteria for what constitutes appropriate accounting research. That is not to suggest that they should diminish the importance of the currently accepted modes or that they should lower their standards. But they need to expand the set of research methods to encompass those that, in other disciplines, are respected for their scientific standing. The methods include historical and field studies, policy analysis, surveys, and international comparisons when, as with empirical and analytical research, they otherwise meet the tests of sound scholarship.

Second, chairmen, deans, and promotion and merit-review committees must expand the criteria they use in assessing the research component of faculty performance.

They must have the courage to establish criteria for what constitutes meritorious research that are consistent with their own institutions' unique characters and comparative advantages, rather than imitating the norms believed to be used in schools ranked higher in magazine and newspaper polls. In this regard, they must acknowledge that accounting departments, unlike other business disciplines such as finance and marketing, are associated with a well-defined and recognized profession. Accounting faculties, therefore, have a special obligation to conduct research that is of interest and relevance to the profession. The current accounting model was designed mainly for the industrial era, when property, plant, and equipment were companies' major assets. Today, intangibles such as brand values and intellectual capital are of overwhelming importance as assets, yet they are largely absent from company balance sheets. Academics must play a role in reforming the accounting model to fit the new postindustrial environment.
 

Third, Ph.D. programs must ensure that young accounting researchers are conversant with the fundamental issues that have arisen in the accounting discipline and with a broad range of research methodologies. The accounting literature did not begin in the second half of the 1960s. The books and articles written by accounting scholars from the 1920s through the 1960s can help to frame and put into perspective the questions that researchers are now studying.

For example, W.A. Paton and A.C. Littleton's 1940 monograph, An Introduction to Corporate Accounting Standards, profoundly shaped the debates of the day and greatly influenced how accounting was taught at universities. Today, however, many, if not most, accounting academics are ignorant of that literature. What they know of it is mainly from textbooks, which themselves evince little knowledge of the path-breaking work of earlier years. All of that leads to superficiality in teaching and to research without a connection to the past.

We fervently hope that the research pendulum will soon swing back from the narrow lines of inquiry that dominate today's leading journals to a rediscovery of the richness of what accounting research can be. For that to occur, deans and the current generation of academic accountants must give it a push.

Michael H. Granof is a professor of accounting at the McCombs School of Business at the University of Texas at Austin. Stephen A. Zeff is a professor of accounting at the Jesse H. Jones Graduate School of Management at Rice University.

March 18, 2008 reply from Paul Williams [Paul_Williams@NCSU.EDU]


Steve Zeff has been saying this since his stint as editor of The Accounting Review (TAR); nobody has listened. Zeff famously wrote at least two editorials published in TAR over 30 years ago that lamented the colonization of the accounting academy by the intellectually unwashed. He and Bill Cooper wrote a comment on Kinney's tutorial on how to do accounting research and it was rudely rejected by TAR. It gained a new life only when Tony Tinker published it as part of an issue of Critical Perspectives in Accounting devoted to the problem of dogma in accounting research.

It has only been since less subdued voices have been raised (outright rudeness has been the hallmark of those who transformed accounting into the empirical sub-discipline of a sub-discipline for which empirical work is irrelevant) that any movement has occurred. Judy Rayburn's diversity initiative and her invitation for Anthony Hopwood to give the Presidential address at the D.C. AAA meeting came only after many years of persistent unsubdued pointing out of things that were uncomfortable for the comfortable to confront.

Paul Williams
paul_williams@ncsu.edu 
(919)515-4436
 

 

 

Jensen Comment
I predict that once again efforts to motivate top accountics scientists to leave campus to collect data in companies and focus on developmental issues will fail big time. It's not that accountics science is a bad thing. Many of the papers published in TAR, JAR, and JAE are outstanding. But these are basic and applied research findings that have little or nothing to do with developmental research in the accounting profession.

Accountics scientists just don't care about developmental research ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
And the accounting profession just does not care about accountics science research.
Efforts to bridge the gap are still failing.

In 16 years as a professor of accounting at the University of Georgia, Denny Beresford was invited just once to speak to an audience of accountics science faculty and doctoral students. He was probably never invited again because he would talk about accounting. Talking about accounting in an accountics research seminar is about the same as talking in Swahili.

We only have to look at the entire program of plenary sessions at the 2012 AAA Annual Meetings to see this.  Not one plenary session was devoted to developmental research or the accounting profession in general.

Respectfully,
Bob Jensen




The December 2012 issue of Accounting Horizons has four commentaries under the heading
Essays on the State of Accounting Scholarship
These essays could not be published in The Accounting Review because they do not contain the equations required of anything published in TAR.
I think we owe Accounting Horizons Editor Dana Hermanson a round of applause for making "Commentaries" a major section in each issue of AH. Hopefully this will be carried forward by new AH Editors Paul Griffin and Arnold Wright.

2012 "Final" Pathways Commission Report ---
http://commons.aaahq.org/files/0b14318188/Pathways_Commission_Final_Report_Complete.pdf
Also see a summary at
"Accounting for Innovation," by Elise Young, Inside Higher Ed, July 31, 2012 ---
http://www.insidehighered.com/news/2012/07/31/updating-accounting-curriculums-expanding-and-diversifying-field

A huge disappointment to me was that none of the essay authors quoted or even referenced the 2012 Pathways Commission Report, which once again illustrates how the mere mention of the Pathways Commission Report sends accountics scientists running for cover. Links to and excerpts from coverage of the Pathways Commission Report are as follows:

"Accounting for Innovation," by Elise Young, Inside Higher Ed, July 31, 2012 ---
http://www.insidehighered.com/news/2012/07/31/updating-accounting-curriculums-expanding-and-diversifying-field

Accounting programs should promote curricular flexibility to capture a new generation of students who are more technologically savvy, less patient with traditional teaching methods, and more wary of the career opportunities in accounting, according to a report released today by the Pathways Commission, which studies the future of higher education for accounting.

In 2008, the U.S. Treasury Department's  Advisory Committee on the Auditing Profession recommended that the American Accounting Association and the American Institute of Certified Public Accountants form a commission to study the future structure and content of accounting education, and the Pathways Commission was formed to fulfill this recommendation and establish a national higher education strategy for accounting.

In the report, the commission acknowledges that some sporadic changes have been adopted, but it seeks to put in place a structure for much more regular and ambitious changes.

The report includes seven recommendations:

According to the report, its two sponsoring organizations -- the American Accounting Association and the American Institute of Certified Public Accountants -- will support the effort to carry out the report's recommendations, and they are finalizing a strategy for conducting this effort.

Continued in article

Pathways Commission Final Report ---
http://commons.aaahq.org/files/0b14318188/Pathways_Commission_Final_Report_Complete.pdf

 

In spite of not acknowledging the Pathways Commission Report, however, the various essay authors did in one way or another pick up on the major resolutions of the Pathways Commission Report. In particular the essays urge greater diversity of research methodology in academic accounting research. 

Since the theme of the essays is "scholarship" rather than just research, I would have hoped that the authors would have devoted more attention to the following Pathways Commission Report resolutions:

 

But it's unfair on my part to dwell on what the essay authors do not do. What's more important is to focus on what they accomplish, and I think they accomplish a lot. It's very important that we keep the momentum of the Pathways Commission Report and these four essays moving until we finally shake the narrow-minded chains binding our faculty hiring, doctoral program curricula, and the article acceptance practices of our leading academic research journals.

I particularly admire these essay authors for acknowledging the seeds of change planted by earlier scholars.

Bob Jensen's threads on the needs for change are at the following links:

 

What went wrong in accounting/accountics research? 
How did academic accounting research become a pseudo science?
http://www.trinity.edu/rjensen/theory01.htm#WhatWentWrong
 

Why must all accounting doctoral programs be social
science (particularly econometrics) "accountics" doctoral programs?

Why accountancy doctoral programs are drying up and
why accountancy is no longer required for admission or
graduation in an accountancy doctoral program
http://www.trinity.edu/rjensen/theory01.htm#DoctoralPrograms

 

574 Shields Against Validity Challenges in Plato's Cave ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

 

How Accountics Scientists Should Change: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
 

 

 

Comments on the AECM on each of these four essays may help further the cause of change in accounting academia.

 

 

"Introduction for Essays on the State of Accounting Scholarship," Gregory B. Waymire, Accounting Horizons, December 2012, Vol. 26, No. 4, pp. 817-819 ---
 http://aaajournals.org/doi/full/10.2308/acch-50236

. . .

CHARGE GIVEN TO PRESENTERS AND ATTENDEES AT THE 2011 AAA STRATEGIC RETREAT

The presenters and attendees at the retreat were asked to consider the following:

Assertion: Accounting research as of 2011 is stagnant and lacking in significant innovation that introduces fresh ideas and insights into our scholarly discipline.

Questions: Is this a correct statement? If not, why? If so, what factors have led to this state of affairs, what can be done to reverse it, and what role, if any, should AAA play in this process?

In terms of presenters, I sought a variety of scholarly perspectives within the accounting academy. I ended up asking the four scholars whose essays follow to speak for 30 minutes on the assertion and questions given above. These scholars represent different areas of accounting research and employ different methodologies in their research. They also are thoughtful people who consider issues of scholarship from long histories of personal experience at different types of universities for their current positions and their doctoral education.

Attendees at the retreat also included members of the Executive Committee. In addition, incoming co-chairs of the Annual Meeting (Anil Arya and Rick Young), Doctoral Consortium (Sudipta Basu and Ilia Dichev), and New Faculty Consortium (Kristy Towry and Mohan Venkatachalam) Committees of AAA were invited to attend.

The primary purpose of the May retreat was “idea generation.” That is, what can we do together as scholars to increase the long-run viability of our discipline? My view was that the retreat and the specific comments by the presenters would provide a basis for a longer-term conversation about the future of accounting scholarship and the role of AAA within that future.



 
SUBSEQUENT EVENTS

Several subsequent events have provided opportunities to continue the conversation about scholarly innovation in accounting. First, I spoke at the AAA Annual Meeting in Denver, August 2011, to update the membership about the initiative now titled “Seeds of Innovation in Accounting Scholarship.” That presentation and the related slides can now be found on AAA Commons (http://commons.aaahq.org/hives/a3d1bee423/summary, or simply www.seedsofinnovation.org). Second, I have written up my own views on these issues and integrated them with the preliminary suggestions developed at the May 2011 retreat (Waymire 2012). Third, further discussion has taken place in the AAA Board and, more importantly, in the new AAA Council. The Council discussion will be ongoing this year, and I expect to form a task force that will consist of Council members and others to develop more specific proposals in January 2012. My hope is that these proposals will cover a broad range of areas that involve AAA publications, consortia, and meetings, and help guide AAA over the next several years as we seek to improve the quality of the accounting discipline.

 

"Framing the Issue of Research Quality in a Context of Research Diversity," by Christopher S. Chapman, Accounting Horizons, December 2012, Vol. 26, No. 4, pp. 821-831 ---
http://aaajournals.org/doi/full/10.2308/acch-10314

The current editorial policy of The Accounting Review states “The scope of acceptable articles should embrace any research methodology and any accounting-related subject, as long as the articles meet the standards established for publication in the journal.” The policy concludes with the statement “The journal is also open to all rigorous research methods.” Private journals are rightly entitled to set as selective an editorial policy as they think proper. An association journal, however, should rightly be expected to maintain an open policy that does not ex ante privilege one form of research over another. In that respect, the clearly stated policy of The Accounting Review of seeking “any” and “all” is admirable. However, the continuing need to make the case for research diversity is disappointing given the longstanding recognition of the dangers of narrowness:

Reinforcing the above [stagnation and decline of accounting research] is a tendency for senior accounting academics to judge and reward the performance of juniors on the basis of a narrow definition of what constitutes academic accounting. (Demski et al. 1991, 4–5)

With regard to The Accounting Review, recent years have seen considerable efforts to enhance the diversity of research appearing in its pages. These efforts have undoubtedly resulted in a higher level of research diversity than that seen for most of the period since the current editorial policy was published in 1989. In conference panels and other arenas of debate, the case has been put that a journal can only publish as diverse sets of papers as are submitted to it. Detailed reports of submissions and acceptance rates are now prepared and published, demonstrating success in this regard. The issue that continues to divide is that of the requisite diversity of an editorial board to encourage the submission of kinds of work that currently remain unsubmitted. Underlying the continuing debates over this aspect of diversity is disagreement over the implications of the caveat in the editorial policy, “as long as the articles meet the standards established for publication in the journal.”

Debates around this topic all too easily reduce to a false dichotomy between diversity and quality, with diversity perceived as a threat to quality. Increased diversity promises to increase the quality of the body of accounting research, however. Accounting is a complex social phenomenon, and so our understanding of it should be enhanced through the adoption of a diverse set of research perspectives and approaches. Grasping accounting in all its complexity is important from an intellectual perspective, but also from the perspective of the ability of our research discipline to contribute back to society (e.g., Flyvbjerg 2001). Diversity of research approaches requires diversity in the proper estimation of quality and validity of research, however (Ahrens and Chapman 2006).

To help structure my arguments around this central issue of the relationship between research diversity and quality, I offer two frameworks in the sections that follow. In doing so, I hope to help us to move toward a situation in which research diversity in The Accounting Review (and other journals) may become taken-for-granted practice, as well as policy.



 
DIVERSITY FRAMED IN U.S.-DOMINANT CATEGORIES

The process of becoming a published researcher is arduous and complex. Along the way, we pick up a variety of tools and techniques. The expression “All-But-Dissertation” reminds us that while tools and techniques are necessary for successful research, they are not sufficient. Expertise and judgment are built up over years of reading, observing the efforts of others, and trying ourselves. Hopefully, as we go on, we become better able to make the fine judgments required to distinguish between creative and fruitful leeway in the application of established approaches, and their misapplication. We become experts in assessing the validity of the kinds of research with which we are familiar. Our hard-won understanding naturally offers the starting point for our engagement with different forms of research.

To illustrate this point, let us look at an attempt to understand research diversity drawn from outside the discipline of accounting. Figure 1 is a reproduction from the introduction from the editor to a special issue of the Journal of Financial Economics entitled “Complementary Research Methods.” This journal addresses a discipline that also has a particularly strong tradition of a particular kind of research; namely, economics-based capital markets research. The figure offers an organizing framework for considering different research methods in relation to this core audience. It distinguishes various kinds of research methods in two dimensions: first, through their use of privately or publicly available data, and second, through the large or small size of their data sets.

Approaches to research potentially vary in a vast number of ways. The point of the figure is to distill these down to a manageable number. Simplification is not per se a problem. Danger arises when the dimensions chosen privilege the interests of one particular group of researchers over those of another, however. Let us consider the designation of a case study as having a small sample size, for example. This framing has been seen also in accounting, with several journals in the past including “small sample” sections that published such work. However, as clearly put by Anderson and Widener (2007), this is to assume that the unit of analysis must always be company-level observations, and this need not be the case.

This figure offers a way for large-sample, public-data researchers to think about how other forms of research might complement (contribute to) their own activities. As such, it represents only a partial engagement with research diversity. The framing of Figure 1 adopts the interests of one subgroup. In a U.S. context, it is commonly understood that in-depth field studies might act as a precursor to subsequent testing through other methods (e.g., Merchant 2008). While field studies sometimes might play exactly this role, such work also has its own purposes that are debated and developed within broad (frequently interdisciplinary) communities of scholars. From the perspective of “complementarity,” as seen in Figure 1, these other purposes might be considered irrelevant (e.g., Merchant 2008). From the perspective of research diversity, and the building of a comprehensive understanding of the nature and effects of accounting, these purposes need no scholarly justification in relation to other forms of research.

In the next section, I will offer a second framework for considering research diversity from a perspective that is less overtly grounded in the assumptions of any particular subgroup of researchers.



 
DIVERSITY FRAMED IN TERMS OF DIFFERENT RESEARCH ASSUMPTIONS

The framework presented in Figure 2 sets out a different way to differentiate research, again based on choices in two dimensions. The language of the figure is couched in terms of the philosophy of science and sociology; however, it is not new to the accounting literature (see, for example, Chua 1986). In its two dimensions, Figure 2 offers summary labels for sets of fundamental research choices and names each possible combination of those choices.

This second framework operates at a far higher level of abstraction than that seen in Figure 1. As previously noted, recent years have seen increases in the diversity of research published in The Accounting Review. That diversity notwithstanding, the entire contents of The Accounting Review since the publication of its current editorial statement (and the scope of research diversity implicit in the categories of Figure 1) fall within the bottom right-hand cell in this second framework—Functionalist research.

Continued in Article

"Accounting Craftspeople versus Accounting Seers: Exploring the Relevance and Innovation Gaps in Academic Accounting Research," by William E. McCarthy, Accounting Horizons, December 2012, Vol. 26, No. 4, pp. 833-843 ---
http://aaajournals.org/doi/full/10.2308/acch-10313 

SYNOPSIS:

Is accounting research stuck in a rut of repetitiveness and irrelevancy? I would answer yes, and I would even predict that both its gap in relevancy and its gap in innovation are going to continue to get worse if the people and the attitudes that govern inquiry in the American academy remain the same. From my perspective in accounting information systems, mainstream accounting research topics have changed very little in 30 years, except for the fact that their scope now seems much more narrow and crowded. More and more people seem to be studying the same topics in financial reporting and managerial control in the same ways, over and over and over. My suggestions to get out of this rut are simple. First, the profession should allow itself to think a little bit normatively, so we can actually target practice improvement as a real goal. And second, we need to allow new scholars a wider berth in research topics and methods, so we can actually give the kind of creativity and innovation that occurs naturally with young people a chance to blossom.

INTRODUCTION

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man.
George Bernard Shaw (1903, Act IV)

Who provides you with the best feedback on your current set of teaching materials and research ideas? For me, at present, that ranked list would be: (1) knowledgeable and creative practitioners who are seeking to improve their field of practice, (2) young doctoral students and faculty from European or other non-American programs in business informatics, (3) a few of my own doctoral students from 15+ years ago, who teach and research in the same areas of accounting systems that I do, and (4) my own undergraduate and master's students. I do have systems, tax, and introductory colleagues who provide accounting context for me, but my feedback list has notable absences, like most of the mainstream Accounting and Information Systems faculty at Michigan State University (MSU) and, indeed, faculty throughout the U.S. accounting academy. Thirty years ago, those last two forums tolerated widespread diversity in both teaching and research ideas, but now those communities have coalesced into just a few approved “areas,” none of which provide me with assistance on my methodological and topical problems. Academic accounting most recently has been developing more and more into an insular and myopic community with no methodological and practice-oriented outsiders tolerated. Why is this?

Becoming aware of how this narrowing of the accounting mind has hindered not just accounting systems, but also academic accounting innovation in general, American Accounting Association (AAA) president Gregory Waymire asked for some “unreasonable” (in the Shavian sense quoted above) accounting academics like me to address the low-innovation and low-relevance problem in academic accounting. I promptly reframed this charge as a question: “Is accounting research stuck in a rut of repetitiveness and irrelevancy?” In the pages that follow, I intend to explore that question from two perspectives: (1) methodological, and (2) sociological. My inspiration for the first perspective is derived from Buckminster Fuller plus Alan Newell and Herbert Simon. For the second, my role model is Lee Smolin.



 
PUTTING A (LIMITED) NORMATIVE MINDSET BACK INTO ACCOUNTING RESEARCH—THE CASE FOR DESIGN SCIENCE AND BEYOND1

We should help create the future, not just study the past.
Paul Gray (Kock et al. 2002, 339)

In March of 2008, two very prominent and distinguished accounting academics—Michael H. Granof of The University of Texas and Stephen A. Zeff of Rice University—noted in The Chronicle of Higher Education that the research models that were being produced by accounting academics were indeed rigorous by the standards of statistical validity and logical positivism, but they were also of very little practical import:

Starting in the 1960s, academic research on accounting became methodologically supercharged … The results however have been paradoxical … [as] those models have crowded out other forms of investigation. The result is that professors of accounting have contributed little to the establishment of new practices and standards, have failed to perform a needed role as watchdog of the profession, and have created a disconnect between their teaching and research. (Granof and Zeff 2008, A34)

Professors Granof and Zeff (2008, A34) went on further to note that “accounting researchers usually look backward rather than forward” and that they, unlike medical researchers, seldom play a significant role in the practicing profession. In general, the thrust of the Granof and Zeff (2008) criticism was that the normative/positive pendulum in accounting research had swung too far toward rear-view empiricism and away from creation of promising new accounting methods, models, and constructs. They appealed directly for expanding the set of acceptable research methods to include those accepted in other disciplines well respected for their scientific standing. Additionally, Granof and Zeff (2008, A34) noted that because accounting faculties “are associated with a well-defined and recognized profession … [they] have a special obligation to conduct research that is of interest and relevance to [that] profession,” especially as the models of those practitioners evolve to fit new postindustrial environments.

Similar concerns were raised in the 1990s by the senior accounting scholar Richard Mattessich (1995, 183) in his treatise Critique of Accounting:

Academic accounting—like engineering, medicine, law, and so on—is obliged to provide a range of tools for practitioners to choose from, depending on preconceived and actual needs … The present gap between practice and academia is bound to grow as an increasing number of academics are being absorbed in either the modeling of highly simplified (and thus unrealistic) situations or the testing of empirical hypotheses (most of which are not even of instrumental nature). Both of these tasks are legitimate academic concerns, and this book must not be misinterpreted as opposing these efforts. What must be opposed is the one-sidedness of this academic concern and, even more so, the intolerance of the positive accounting theorists toward attempts of incorporating norms (objectives) into the theoretical accounting framework.

Mattessich, Zeff, and Granof were followed most recently in the same vein by Robert Kaplan (2011), who raised similar concerns in his AAA 2010 Presidential Scholar Lecture.

In my opinion, these weaknesses noted by Granof, Zeff, Mattessich, and Kaplan are attributable primarily to the insularity and myopia of the American-led accounting academy. Our research excludes practice and stifles innovation because of the way our journals, doctoral programs, and academic presentations are structured.

The Innovation Roadblock in Accounting Systems

The rear-view empiricism research malaise that all four of these scholars attribute to accounting as a whole is especially present in its technical subfield of accounting information systems (AIS). In fact, it is even more exaggerated, because as time goes on, an increasingly high percentage of AIS researchers aspire to develop reputations not in the field they teach (i.e., accounting systems), but in the accounting mainstream (i.e., financial reporting). Thus, they follow many of the misdirected paths described above, and their results are similarly disappointing. With some notable exceptions—primarily in work that involves semantic modeling of accounting phenomena or computerized monitoring and auditing—university-driven modernization in accounting systems has been virtually nonexistent since the 1970s, and what limited improvements have occurred can be attributed primarily to the independent practice marketplace.

Continued in article

 

"Is Accounting Research Stagnant?" by Donald V. Moser, Accounting Horizons, December 2012, Vol. 26, No. 4, pp. 845-850 ---
http://aaajournals.org/doi/full/10.2308/acch-10312

INTRODUCTION

I accepted the invitation to present my thoughts to the American Accounting Association Executive Committee on whether accounting research has become stagnant for several reasons. First, I believe the question is important because the answer has widespread implications, one of which is the extent to which accounting research will remain an important part of the accounting academic profession in the years to come. In order to maintain the current stature of accounting research or to increase its importance, we need to ensure that we produce research that someone cares about. Second, there appears to be a growing sentiment among some accounting researchers that much of the research currently published in the top accounting journals is too similar, with too much emphasis on technique rather than on whether the research addresses an interesting or important question. My final reason was more self-serving. I thought this would provide a good opportunity to reflect on an important issue, and that committing to share my thoughts in a public forum would force me to give the issue the serious consideration it warrants. My comments below describe some conclusions I reached based on what others have written about this issue, discussions with colleagues, and my own reflections.



 
HAS ACCOUNTING RESEARCH STAGNATED?

My answer to the question of whether accounting research has become stagnant is a qualified “yes.” I qualify my answer because I do not believe that our research is entirely stagnant. Looking at the issue from a historical perspective, accounting research has, in fact, evolved considerably over time. In other words, as described quite eloquently recently by Hopwood (2007), Birnberg (2009), and Kaplan (2011), accounting research has an impressive history of change. While each of these scholars has their own views on what type of accounting research we should focus on now and in the future, each also describes a rich history of how we evolved to get where we are today.

In addition to the longer-term history of change, there has been substantial recent change in the perspectives reflected in accounting research and the topics now considered acceptable in accounting research. It was not that long ago that accounting studies that hypothesized or documented behavior that was inconsistent with the rational self-interest assumptions of neoclassical economics had a difficult time finding a publication outlet in the top accounting journals. Today, thanks mostly to the rise of behavioral economics, we see more experimental, analytical, and archival research that incorporates concepts from behavioral economics and psychology published in most of the top accounting journals. Recently, we have even seen work on neuroaccounting, which draws on findings from neuroscience, make its way into accounting journals (Dickhaut et al. 2010; Birnberg and Ganguly 2012). We also have seen new topics appear in published accounting research. For example, while there is a history of work on corporate social responsibility in Accounting, Organizations and Society, more recently, we have seen increased interest in such work as evidenced by articles published or forthcoming in The Accounting Review (Simnett et al. 2009; Balakrishnan et al. 2011; Dhaliwal et al. 2011; Kim et al. 2011; Dhaliwal et al. 2012; Moser and Martin 2012). In addition, The Harvard Business School, in collaboration with the Journal of Accounting and Economics, recently announced that they will host a conference on “Corporate Accountability Reporting” in 2013.1

However, despite evidence of both historical and more recent change, there is also considerable evidence of stagnation in accounting research. For example, despite some new topics appearing in accounting journals, a considerable amount of the published work still relates to a limited group of topics, such as earnings management, analysts' or management forecasts, compensation, regulation, governance, or budgeting. Researchers also mostly use the same research methods, with archival studies being most prevalent, and experimental studies running a distant second. The underlying theories used in mainstream U.S. accounting research are also quite limited, with conventional economic theory being the most commonly employed theory, but, as noted above, behavioral economic and psychological theories becoming more common in recent years. While the top accounting journals have become more open to new perspectives in recent years, the list of top journals has changed little, with the exception of the rise of the Review of Accounting Studies. Moreover, with the exception of some of the American Accounting Association journals, the top private U.S. accounting journals have mostly retained a somewhat narrow focus in terms of the type of research they typically publish. Finally, many published studies represent minor extensions of previous work, have limited or no tension in their hypotheses (i.e., they test what almost certainly must be true), have limited implications, and are metric or tool driven. Regarding the second-to-last item, i.e., limited implications, many studies now only claim to “extend the literature,” with no discussion of who, other than a limited number of other researchers working in the same area, might be interested in the study's findings. Regarding the last item, i.e., metric-driven research, some studies appear to be published simply because they used all the latest and best research techniques, even though the issue itself is of limited interest.

Of course, as with most issues, there are opposing views. Some accounting researchers disagree with the premise that our research is stagnant. Specifically, they believe that the methods and theories currently used are the best methods and theories, and that the top-ranked accounting journals are the best journals because they publish the best research. Under this view, there is little need for more innovative research. Whether such views are correct or simply represent a preference for the status quo is beyond the scope of this article. Suffice to say that my personal views on these issues are mixed, but I agree somewhat more with the view that accounting research is insufficiently innovative.



 
DETERRENTS TO INNOVATION IN ACCOUNTING RESEARCH

To the extent that accounting research lacks innovation, the question is: what has brought us to this point? There appears to be considerable blame to spread around. One of the biggest culprits is the incentive system that accounting researchers face (Swanson 2004). In order to earn tenure or promotion, or even simply to receive an annual pay increase, researchers must publish in the top accounting journals and be cited by other researchers who publish in those same journals (Merchant 2010). Researchers' publication record and related citations depend critically on the views of editors and reviewers with status quo training and preferences, and the speed with which manuscripts make their way through the review process. Not surprisingly, this leads most researchers to limit the topics they study and make their studies as acceptable to status quo editors and reviewers as possible. This is the safest way to increase the number of papers published in top journals, which, in turn, increases the likelihood of citations by others who publish in those journals. Also, the constant pressures to publish more articles in top journals, teach more or new courses, improve teacher ratings, and provide administrative service to the school leave little time for innovative research. It is easier to simply do more of the same because this increases the odds of satisfying the requirements of the school's incentive system.

A second impediment to innovative research is the way we train doctoral students. Too often, faculty advisors clone themselves. While such mentor relationships have many benefits, insisting that doctoral students view the world in the same way a faculty advisor does perpetuates the status quo. Also, most doctoral students take the same set of courses in economics, statistics, etc., and usually before they take accounting seminars. Again, while such methods training is essential, if all doctoral students take virtually all of the same courses, they are less likely to be exposed to alternative views of the world. Finally, in recent years, more doctoral students enter their programs with strong technical skills in economics, quantitative techniques, and statistical analysis, but many now lack professional accounting experience.2 Because such students prefer to engage in research projects that apply the skills they have, they tend to view research in terms of the techniques they can apply rather than stepping back to consider whether the research question is novel or important.

A third impediment to innovative research may involve the types of individuals who are attracted to accounting as a profession or research area. Accountants tend to like clarity and focus. Indeed, we often train our undergraduate or master's students to work toward a “right answer.” This raises the possibility that accountants are less innovative by nature than researchers in some other areas. Similarly, some accountants have a narrow definition of accounting. Some think of it as only financial accounting, and even those who define it more broadly as including managerial accounting, auditing, and tax, still tend to rigidly compartmentalize accounting into such functional areas. Such rigid categories limit the areas that accounting researchers consider to be appropriate for accounting research.

A final reason why accounting research is less innovative than it could be is that accounting researchers do not collaborate with researchers who employ different research methods or with researchers outside of accounting as often as they could. We tend to work with researchers who use the same research methods we do. That is, archival researchers typically collaborate with other archival researchers, and experimental researchers typically collaborate with other experimentalists. Moreover, only rarely do we branch out to work with researchers in other areas of business (e.g., organizational behavior, strategy, ethics, economics, or finance), and even less frequently with researchers from areas outside of business (e.g., psychology, decision sciences, law, political science, neuroscience, anthropology, or international studies).



 
WHAT CAN WE DO TO FOSTER INNOVATION?

To the extent that accounting research is less innovative than it could be for some or all of the reasons offered above, what can be done to change this? I divide my discussion of this issue into two categories: (1) actions that we, the broader research community, could take, and (2) actions that the American Accounting Association could take. Accounting faculty members at schools with doctoral programs could rethink how we recruit doctoral students. Currently, we tend to recruit students who have a good fit with research active faculty members who are likely to serve as the students' faculty advisor. Of course, this makes perfect sense because a mismatch tends to be very costly for both the student and the faculty advisor. On the other hand, this approach tends to produce clones of the faculty advisor. So, unless the faculty advisor values innovation, the chances that the doctoral student will propose or be allowed to pursue a new line of research are significantly reduced. Perhaps we need to assess prospective doctoral students, at least partially, on the novelty of their thinking. More importantly, we need to be more open to new ideas our students propose and encourage and support such ideas, rather than discourage novel thinking. Of course, a faculty advisor would be remiss not to explain the risks of doing something different, but along with explaining the risks, we could point out the potential rewards of being first out of the gate on a new topic and the personal sense of fulfillment that accompanies doing something you believe in and enjoy. Faculty advisors could also lead by example. Senior faculty could take some risks of their own to show junior faculty and doctoral students that this is acceptable rather than frowned upon.

Continued in article

 

"How Can Accounting Researchers Become More Innovative?" by Sudipta Basu, Accounting Horizons, December 2012, Vol. 26, No. 4, pp. 851-870 ---
http://aaajournals.org/doi/full/10.2308/acch-10311 

We fervently hope that the research pendulum will soon swing back from the narrow lines of inquiry that dominate today's leading journals to a rediscovery of the richness of what accounting research can be. For that to occur, deans and the current generation of academic accountants must give it a push.—
Michael H. Granof and Stephen A. Zeff (2008)

Rather than clinging to the projects of the past, it is time to explore questions and engage with ideas that transgress the current accounting research boundaries. Allow your values to guide the formation of your research agenda. The passion will inevitably follow —
Joni J. Young (2009)


 

 

INTRODUCTION

Are most accounting academics and professionals excited when they receive the latest issue of The Accounting Review or an email of the Table of Contents? When I was a doctoral student and later an assistant professor, I looked forward to receiving new issues of top accounting journals. But as my research horizons widened, I found myself less interested in reading a recent issue of an accounting journal than one in a nearby discipline (e.g., Journal of Law and Economics), or even a discipline further away (e.g., Evolution and Human Behavior). Many accountants find little insight into important accounting issues in the top U.S. academic journals, which critics allege focus on arcane issues that interest a narrowing readership (e.g., Sterling 1976; Garcha et al. 1983; Flesher 1991; Heck and Jensen 2007).1

Several prominent scholars raise concerns about recent accounting research. Joel Demski's 2001 American Accounting Association (AAA) Presidential Address acknowledges the excitement of the mid-20th century advances in accounting research, but notes, “Of late, however, a malaise appears to have settled in. Our progress has turned flat, our tribal tendencies have taken hold, and our joy has diminished.” The state of current U.S. accounting scholarship has been questioned repeatedly by recent AAA presidents, including Judy Rayburn (2006), Shyam Sunder (2006), Sue Haka (2008), and Greg Waymire (2012).2

Assuming that when there is smoke there is likely a fire, I adopt a “glass-half-empty” lens.3 I diagnose the problems in our discipline after briefly outlining a few long-term causes for the symptoms identified by critics. I seek remedies for the more urgent symptoms, drawing upon examples from other disciplines that are exploring ways to reinvigorate scholarship and restore academic relevance. While a few of these can be implemented by AAA, many others can be adopted by journal editors and authors. I hope that these personal views stimulate conversations that lead to better accounting scholarship.

My main suggestion is to re-orient accounting researchers toward addressing fundamental accounting questions, and to provide awards and incentives for innovative leadership, rather than for passively following accounting standard-setters. This will require educating young scholars in accounting history as well as the history of accounting thought. In addition, AAA annual meetings should feature a named lecture by an eminent non-accounting scholar to expose us to new ideas and methods. We should rely less on statistical significance for assessing importance and instead emphasize practical significance in judging the value of a research contribution. Accounting research should be made more accessible to practitioners, interested laymen, and academic colleagues in other disciplines by improving readability—for example by making articles shorter and less jargon laden, and replacing tables with more informative figures. Finally, we should more actively seek out and explore accounting domains beyond those captured in machine-readable databases.



 
WHAT ARE THE SYMPTOMS? WHAT IS THE DIAGNOSIS?

Demski (2007) and Fellingham (2007) contend that accounting is not an academic research discipline that contributes knowledge to the rest of the university. This assertion is supported by predominantly one-way citation flows between accounting journals and those of neighboring disciplines (Lee 1995; Pieters and Baumgartner 2002; Bricker et al. 2003; Rayburn 2006). Such sentiments imply low status of the accounting professoriate within the academy, and echo those of Demski et al. (1991), Zeff (1989), Sterling (1973), and, from longer ago, Hatfield (1924). Furthermore, and perhaps of greater concern, accounting research has little impact on accounting practice, and the divergence between accounting research and accounting practice has been growing over the last half century (e.g., Langenderfer 1987; Baxter 1988; Bricker and Previts 1990).

What other symptoms have critics identified? Demski (2008) highlights the lack of passion in many accounting researchers, while Ball (2008) bemoans the “absence of a solidly grounded worldview—a deep understanding of the functioning of financial reporting in the economy” among accounting professors and doctoral students alike. Kaplan (2011) suggests that accounting research is predominantly conducted in an ivory tower with little connection to problems faced by practitioners, whereas Sunder (2007) argues that mandatory uniform standards suppress thinking among accounting researchers, echoing Baxter (1953). Kinney (2001) submits that accounting researchers are not sure about which research domains are ours. Demski et al. (1991) raised all these concerns previously, implying that accounting research has been stagnant for decades. No wonder I (and others) find too many recent accounting papers to be tedious and uninteresting.

A simplistic diagnosis is that U.S. accounting research mimics the concerns and mores of the U.S. accounting profession. The accounting profession in the middle of the 20th century searched for principles underlying accounting practices, which provided a demand for normative academic theories. These demands were met by accounting classics such as Gilman (1939), Paton and Littleton (1940), and Edwards and Bell (1961). Although standards were originally meant to guide accounting practice, standard-setters soon slid down the slippery slope of enforceable rules (Baxter 1979). Consequently, ever more detailed rules were written to make reported numbers more reliable. Bureaucrats wanted to uniformly enforce explicit protocols, which lawyers creatively interpreted and financial engineers circumvented with new contracts. In parallel, accounting researchers abandoned normative debates and turned to measuring and evaluating the effects of alternative accounting rules and attempts to evade them (e.g., Zeff 1978). In sum, as U.S. GAAP moved from norm based to rule based, or from emphasizing relevance to increasing uniformity and reliability, accounting researchers began favoring formal quantitative methods over informal qualitative arguments. As U.S. GAAP and the Internal Revenue Code became ever more arcane, so did U.S. accounting research.

Another diagnosis is that our current state stems from accounting trying to become a more scientific discipline. During 1956–1964, the Ford Foundation gave Carnegie Mellon, Chicago, Columbia, Harvard, and Stanford $14.4 million to try to make their business schools centers of excellence in research and teaching (Khurana et al. 2011). Contributions from other foundations raised the total to $35 million (Jeuck 1986), which would be about $268 million in 2012 dollars.4 The Ford Foundation espoused quantitative methods and economics with a goal of making business research more “scientific” and “professional” (Gordon and Howell 1959). Business schools responded by emphasizing statistical analyses and mathematical modeling, and mathematical training rather than accounting knowledge became increasingly required for publications in the top accounting journals (e.g., Chua 1996; Heck and Jensen 2007). While business researchers had some notable successes in the 1960s and 1970s soon after introducing these new techniques, the rate of innovation has allegedly since fallen.

Concurrently, U.S. business schools became credentialing machines guided by a “(student) customer is always right” ethos, so there was also less demand for accounting theory from accounting students and their employers (Demski 2007), and intermediate accounting textbooks replaced theory with rote memorization of rules (Zeff 1989).5 In 1967, the American Assembly of Collegiate Schools of Business (AACSB) increased the degree requirements for accredited accounting faculty from a master's-CPA combination to a Ph.D., effective in 1969. Many accounting doctoral programs were started in the 1960s to meet the new demand for accounting doctorates (Rodgers and Williams 1996), and these programs imitated the new elite accounting programs. Statistics, economics, and econometrics screening became requisite challenges (Zeff 1978), preceding accounting courses in many doctoral programs. Unsurprisingly then, doctoral students came to infer that accounting theory and institutional content are merely the icing on the cake of quantitative economics or psychology.

In summary, the forces that induced change in U.S. accounting academe in the aftermath of World War II still prevail. The goals and methods of accounting research have changed profoundly over the last half century (e.g., Zeff 1978), leading accounting researchers to more Type III error (e.g., Dyckman 1989): “giving the right answer to the wrong problem” (Kimball 1957) or “solving the wrong problem precisely” (Raiffa 1968). To the extent that accounting relevance has been sacrificed for tractability and academic rigor, these changes have slowed accounting-knowledge generation.



 
HOW CAN ACCOUNTING RESEARCH BECOME MORE INNOVATIVE?

Demski (2007) characterizes recent accounting research thus: “Innovation is close to nonexistent. This, in fact, is the basis for the current angst about the ‘diversity' of our major publications. Deeper, though, is the mindset and factory-like mentality that is driving this visible clustering in the journals.” He laments further, “The vast bulk of our published work is insular, largely derivative, and lacking in the variety that is essential for innovation. Arguably, our published work is focusing increasingly on job placement and retention.” Demski et al. (1991) conjecture, “Accounting researchers apparently suffer from insecurity about their field of study, leading them to perturb fairly secure research paradigms (mostly those that have been accepted by economists) within an ever-narrowing circle of accounting academics isolated from the practice world. There is very little reward in the current academic system for experimentation and innovation that has the potential for impacting practice.” My sense is that many accounting researchers (especially those who have not practiced accounting) believe that the conceptual framework has resolved all fundamental accounting issues and that accounting researchers should help regulators fill in the technical details to implement their grand plan. As blinkers keep horses focused on the road ahead, the current conceptual framework blinds accounting academics to the important issues in accounting (especially the many flaws in the conceptual framework project).

Identifying the major unsolved questions in a field can provide new directions for research quests as well as a framework for teaching. For example, Hilbert (1900) posed 23 unsolved problems for mathematicians to test themselves against over the 20th century. His ideas were so successful in directing subsequent mathematics research that $1 million Millennium Prizes have been established for seven unsolved mathematical questions for the current century.6 Many scientific disciplines compile lists of unsolved questions for their fields in an attempt to imitate the success of 20th century mathematics.7 There is even a new series of books titled, The Big Questions: xxx, where xxx is philosophy (Blackburn 2009), physics (Brooks 2010), the universe (Clark 2010), etc. The series “is designed to let renowned experts confront the 20 most fundamental and frequently asked questions in a major branch of science or philosophy.” There is, however, neither consensus nor much interest in addressing the big unanswered questions in accounting, let alone exploring and refining them, recent attempts notwithstanding (e.g., Ball 2008; Basu 2008; Robinson 2007).

Few accounting professors can identify even a dozen of the 88 members of the Accounting Hall of Fame, let alone why they were selected as “having made or are making significant contributions to the advancement of accounting.”8 Since many doctoral syllabi concentrate on recent publications to identify current research frontiers, most recent doctoral graduates have read just a handful of papers published before 2000. This leaves new professors with little clue to the “most fundamental and frequently asked questions” of our discipline. The American Economic Association recently celebrated the centenary of The American Economic Review by appointing a Top 20 Committee to select the “top 20” articles published in the journal over the previous 100 years (Arrow et al. 2011). Similarly, the Financial Analysts Journal picked the best articles over its first 50 years (Harlow 1995). Accounting academics could similarly identify the top 20 articles published in the first 100 years of The Journal of Accountancy (1905–2004), the top 25 articles published in Accountancy (1880–2005), or proportionately fewer papers for The Accounting Review (1926–2011).

If accounting researchers do not tackle the fundamental issues in accounting, we collectively face obsolescence, irrelevance, and oblivion.9 Demski et al. (1991) recommended identifying a “broad set of challenging, relevant research questions” to be distributed to seasoned researchers to develop detailed research proposals that would be presented at a “proposals conference,” with the proceedings distributed widely among accounting academics. Lev (1992) commissioned several veteran researchers, including Michael Brennan (Finance) and Daniel Kahneman (Psychology), to write detailed research proposals on “Why is there a conservatism bias in financial reporting?” Eight proposals were presented at a plenary session of the 1993 AAA Annual Meeting in San Francisco, and copies of the research proposals were included in the packets of all annual meeting attendees. This initiative provided the impetus for conservatism research over the last two decades (cf. Basu 2009).

Continued in article

January 3, 2013 Reply from Bill McCarthy

Hi Bob:

In complaining about the lack of a connection between the Accounting Horizons commentaries and the Pathways Commission, your timing is off. The commentaries were based on presentations given in May of 2011. I certainly updated my commentary earlier this year, but the final versions were all submitted months before the release of Pathways at the AAA Annual Meeting in August. I certainly knew many involved people (especially Julie David, Mark Nittler, and Brian Sommer), but the first time I saw the report was when I picked up my AAA packet in Washington. If you want to see how to connect my commentary to Pathways, I would recommend looking at the AAA video from the annual meeting, "The Pathways Commission -- Creating Connections ...." It is available on the AAA Commons. Julie, Cheryl Dunn, and I relate our own work in semantic modeling of accounting phenomena (REA modeling) to practice, teaching, and research as a good example of what academics could be like if the Pathways recommendations are taken seriously. I think the whole video is worth watching, including the Q&A, but of course I was a participant, so you can judge for yourself. Unfortunately, we could not have Mark in Washington as a fourth participant, but his current ideas were well summarized in the video that Julie showed. Alternatively, you could look at:

http://blogs.workday.com/Blog/time_is_right_to_modernize_400_year_old_accounting_practices.html

I suspect that some of the other commentators might have augmented their papers as well, had we all been aware of the full Pathways set of recommendations. I certainly do not fear Pathways; I am an ardent supporter. As I say on the video, my only misgivings are associated with the realization that Pathways implementations might cause unreasonable troublemakers (to adopt the AH term) like me to prosper. I am not sure academic accounting could accommodate such a deluge of deliberately wayward behavior in such a short time.

Bill McCarthy
Michigan State

January 3, 2013 reply from Bob Jensen

I think the essays themselves deal very well with issues of research/scholarship diversity and the need for innovation. At the same time, they are weak with respect to promoting more integration between the profession and researchers/scholars who rarely venture from campus to discover the research of most interest to the profession.


Of course this complaint does not apply to you scholars in AIS. AIS is a bright spot that focuses on real problems of information systems in business.


Thanks,
Bob Jensen


"The Myth of Rigorous Accounting Research," by Paul F. Williams, Accounting Horizons, December 2014 ---
http://aaajournals.org/doi/full/10.2308/acch-50880

In this brief paper, I provide an argument that the rigor that allegedly characterizes contemporary mainstream accounting research is a myth. Expanding on arguments provided by West (2003), Gillies (2004), and Williams (1989), I show that the numbers utilized extensively to construct the statistical models that are the central defining feature of rigorous accounting research are, in many cases, not adequate to the task. These numbers are operational numbers that cannot be construed as measures or quantities of any kind of stable property. Constructing elaborate calculative models using operational numbers leads to equations whose results are not clearly decipherable. The rigorous nature of certain preferred forms of accounting research is, thus, largely a matter of appearance and not a substantive quality of the research mode that we habitually label “rigorous.” Thus, the policy recommendations implied by the results of rigorous accounting research may be viewed with considerable skepticism.

. . .

The Form of Rigorous Accounting Research

The distinct type of research that receives accolades for being rigorous is familiar to anyone who is acquainted with the top three academic accounting journals: Journal of Accounting Research (JAR), Journal of Accounting and Economics (JAE), and The Accounting Review (TAR). The fundamental premises underlying this rigorous research are enumerated by McCloskey (1985, 6–7):

  1. Prediction and control is the point of science.
  2. Only the observable implications (or predictions) of a theory matter to its truth.
  3. Observability entails objective, reproducible experiments; mere questionnaires interrogating human subjects are useless, because humans might lie.
  4. If and only if an experimental implication of a theory proves false is the theory proved false.
  5. Objectivity is to be treasured; subjective “observation” (introspection) is not scientific knowledge, because the subjective and the objective cannot be linked.
  6. Kelvin's Dictum: “When you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind.”
  7. Introspection, metaphysical belief, aesthetics, and the like may well figure in the discovery of an hypothesis, but cannot figure in its justification; justifications are timeless, and the surrounding community of science irrelevant to their truth.
  8. It is the business of methodology to demarcate scientific reasoning from nonscientific, positive from normative.
  9. A scientific explanation of an event brings the event under a covering law.
  10. Scientists—for instance, economic scientists—ought not to have anything to say as scientists about the oughts of value, whether of morality or art.7
Consistent with this characterization of paradigmatic social science, academic papers in the leading accounting journals consist almost exclusively in the presentation of elaborate statistical models—what Abbott (2004) calls statistical causal analyses. Those that are not statistical models are analytical models of accounting phenomena relying on the calculus of differentiation and integration and, also, the methodology of standard neoclassical economics. Rigorous accounting research is now that which reflects the aforementioned premises of what constitutes good scientific behavior.

To confirm this characterization of rigorous research, the 135 papers published in JAR, JAE, and TAR during 2011 were examined.8 Five of the papers were analytic,9 13 were papers that involved behavioral experiments utilizing ANOVA/ANCOVA designs, one utilized path analysis (a linear modeling), and 116 (86 percent of all articles) produced some form of linear statistical model (e.g., probit, logit). Of the 135 papers, 78 percent pertained to a finance-related topic. Fourteen were management in nature, 11 were auditing in nature, and three were “other.” Of all the papers (regardless of topic), 96 percent involved some kind of linear, statistical causal analysis. The end product of these scholarly efforts is a statistical (mathematical) model conforming to Sen's (1988) engineering approach to economics. These statistical models presume some predictive purpose. Such models must be constructed in the belief that were outcomes of future independent variables entered into the models, the calculative results of the models would produce values for the dependent variable that approximate the actual future value of that variable. Without that belief, it is hard to argue that these models have any power to explain some phenomenon. Without some belief that the relationships expressed in the equations of the model are temporally stable, each model becomes substantially an anecdote.10 Thus, accounting research that qualifies as rigorous by its publication in the premier journals is a process of constructing putatively predictive linear, mathematical models, i.e., algorithmic forms of knowledge of accounting phenomena. The 116 (86 percent of all papers) regression papers published in 2011 utilized an average of 20 variables (dependent and explanatory) in their models; an average of nine variables (43 percent of all variables) were taken from the financial statements of actual firms (mostly from the Compustat database). 
The frequency of such accounting measures (some taken directly from the financial statements, some based on arithmetic operations performed on financial statement numbers, including further regressions on accounting numbers) ranged from one to 27 in a single regression model. The average number of such variables per equation was seven, while the total number of variables, on average, was 13. In many cases, accounting or accounting-derived numbers represented the dependent variable. That is, the phenomenon being explained is represented via accounting operations so that an accounting number stands in for the real-world phenomenon being explained. Accounting scholars engaged in conducting rigorous research act as if these accounting numbers are particularly useful for analyzing and evaluating accounting phenomena in a rigorous manner. However, there is good reason to believe that the supposition that accounting numbers are useful for rigorous research is invalid.
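The tallies Williams reports are internally consistent, which a quick back-of-the-envelope check confirms (all counts below are taken from the passage above; nothing is new data):

```python
# Sanity check of Williams's 2011 tallies for JAR, JAE, and TAR.
total = 135
analytic = 5
behavioral = 13        # ANOVA/ANCOVA experimental designs
path_analysis = 1      # linear modeling
linear_models = 116    # probit, logit, OLS, and similar regressions

# The four categories exhaust the sample.
assert analytic + behavioral + path_analysis + linear_models == total

linear_share = linear_models / total                                  # "86 percent of all articles"
causal_share = (behavioral + path_analysis + linear_models) / total   # "96 percent ... causal analysis"

print(f"linear statistical models: {linear_share:.0%}")   # → 86%
print(f"linear causal analyses:    {causal_share:.0%}")   # → 96%
```

Both rounded percentages match the figures quoted in the text.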

Ironically, it is current accounting policy making based on the same suppositions as rigorous accounting research (Williams and Ravenscroft 2014) that makes the usefulness of accounting data for performing rigorous accounting research implausible. This is so for two related reasons: the operational nature of accounting numbers (i.e., they are not quantities) and the “clock problem.”


 
THE OPERATIONAL STATUS OF ACCOUNTING NUMBERS

The first problem is the status of accounting numbers as operational numbers, which makes their status as “quantities” suitable for mathematical manipulations problematic. While accountants have described accounting as a “measurement discipline; accountants measure things” (Ijiri 1975; Mock 1976), what it is that is measured, or even whether it can be measured, is not clear. Stamp (1993, 272)11 observes:

Thus despite the common use of the terms “measure” or “measurement” in accounting, it seems to this author that there is no operation whatsoever in accounting that could be described as “measurement,” in the sense used by physicists, or biologists, or geologists, and so on. To measure in science (and, actually, in the everyday sense described above) necessarily involves the physical operation of comparison of the quantity being measured with some standard. (emphasis added)

When the American Institute of Certified Public Accountants (AICPA) attempted to formalize the objectives of accounting, Beaver and Demski (1974, 180) criticized them, noting that “the term income determination is used as if it were some unambiguous, monolithic concept (such as true earnings) devoid of any measurement error.” What Beaver and Demski (1974) illustrated, perhaps without intending such, is that income is a particular type of concept, the kind of concept of which the conceptual language of accounting is largely comprised. Dworkin (2011) identifies three types of concepts that humans employ. According to him, some of the concepts we use are criteria, i.e., “we use the same criteria in identifying instances” (Dworkin 2011, 158). As an example, Dworkin (2011) offers the concept of an equilateral triangle—any triangle with sides of equal length. By measuring the sides with a measuring device (a ruler), we can establish with a high degree of confidence whether a triangle meets the criterion. A second type of concept is natural kind, i.e., “things that have a fixed identity in nature, such as a chemical compound or an animal species, and that people share a natural-kind concept when they use that concept to refer to the same natural kind” (Dworkin 2011, 159). The third type of concept is interpretive, a type Dworkin (2011, 160) describes thus:12

We share these concepts … not because we agree in their application once all other pertinent facts are agreed upon, but rather by manifesting an understanding that their correct application is fixed by the best interpretation of the practices in which they figure.

The role of “interpretation” in human understanding is ubiquitous. Dworkin (2011, 163) states the all pervasiveness of interpretive activity thusly:

Our account of the concepts that structure an intellectual domain is itself an interpretation of that domain, a device for making sense of the inquiry, reflection, arguments, and strategies that mark the domain. So in one sense all concepts are interpretive; because we must interpret the practice of “baldness” to decide that that concept is both criterial and vague, we might say that it is an interpretive fact that it is both.

The concepts that make up the everyday language of accounting are not natural kinds (Searle 1995; Lee 2006), but interpretive all the way down.

For example, accounting depends upon two fundamental concepts: “asset” and “liability,” which make up the accounting identity Assets = Liabilities + Net Assets, the balance sheet equation upon which the entire structure of accounting rests. The concept of “asset” has undergone radical change over the decades as the concept has been interpreted and re-interpreted through changing historical contexts.13 Williams (2003, 134) traces the change in meaning of asset from the late nineteenth century, when it first came to be used widely by accountants: “Toward the end of the 19th century the term assets, which was understood in commerce and law as meaning property available for the payment of debts, began to appear prominently in the accounting literature.” So when “asset” entered the accounting lexicon in a prominent way, the term applied to things with a cash value that could be used to settle debts; i.e., an asset depended upon the interpretive concept of “debts” and what it took to settle them. An asset was conceived as satisfying the criteria of being a separable property that could provide cash14 for settling money-denominated claims. In that incarnation, an asset could be represented numerically by a sales price available at the moment of representation. Consider how radically different our current concept of “asset” is: anything that provides probable future economic benefits (FASB 1980). This conceptualization of asset precludes nothing from being an asset; the criterion “future economic benefit” makes “assetness,” like Dworkin's (2011) “baldness,” more ambiguous and increases the potential for disagreement over whether something actually qualifies as an asset. The current concept also presumes that representation of the present relies on representations of the future, since without knowing about future economic benefits, it is not possible to identify what is an asset now. 
The time frame for knowing what is an asset and what one of its potential quantitative representations should be has been radically changed. Instead of “if it is worth something in money terms now, it is an asset now,” it is “if it is worth anything in money terms at any time in the future, it is an asset now.” “Asset” as a concept is of a social and not a natural kind, and undergoes re-interpretations that are influenced by social and political changes.15

The problematic for rigorous accounting research addressed in this essay is that asset, so central to accounting, is a concept that eludes any kind of measurement, i.e., there is no metric that provides a quantitative representation of “assetness.” If we could agree that two “things” meet the criteria of assets (and there is no guarantee this will be the case), then it is not possible to determine quantitatively which of the two is more asset-like than the other. Asset is a categorical concept and no more than a categorical concept requiring judgment about which there can be substantial disagreements. We could weigh the two assets and determine which is heavier or compare what they originally cost and determine which was more costly to acquire at the time of acquisition, but there is no metric that allows measurement of the relative amounts of “assetness” possessed by the two assets. They either are or are not assets. Once identified as an asset, all assets are equal so far as their “assetness” is concerned. To the extent that assets are “measurable,” they are so only in terms of other properties, arbitrarily chosen, that they possess, e.g., weight, volume, carbon footprint, cost, “fair value,” but not in terms of a unit of measure endemic to the concept.16 So measurement of assets, liabilities, and, by derivation, everything else contained in financial statements is an arbitrary choice of properties to be “measured.” This makes scientific investigation of accounting much more difficult, since the “science” itself and any meaning attributed to the results of scientific investigation will depend on what arbitrary measures of asset one chooses. 
And these choices are not scientific judgments, but value judgments.17 Physicists confronted with “time” as a variable in their mathematical models of the natural world face some constraint on how they choose to measure the concept—the concept leads them to choose measurement methods that are constrained by the concept itself and its place in physics theory. Nature will not provide physicists, carte blanche, choice in how to measure time. Asset as a concept does not constrain how we might choose to “measure” it, since it is not a scientific concept (assets do not exist in nature), but one inherited from historical human practices of civic administration and law and, more recently, theories of financial economics.

Thus, accounting can be construed as measurement only in the sense of assigning numbers to an object where there is considerable discretion in what numbers are to be assigned. What is the quantity represented by those numbers is subject to considerable ambiguity because the choice of metric is arbitrary and subject to wide variation in application of measurement “technique.” If we reach a state where the numbers assigned to different assets and different liabilities are arbitrarily different from each other depending on which particular asset and liability we are considering, then the prospect that those numbers are quantities of anything about which we can coherently speak is rather remote. Accounting numbers are not quantities corresponding to physical reality, but numerical representations of interpretive concepts about which there is often significant disagreement. The problematic for rigorous accounting research lies in the ambiguity of just what is the quantity represented by accounting numbers and the implications that has for what the results of archival research can mean.

. . .

CONCLUSION

In Fall 2011, physicists at CERN and the Gran Sasso National Laboratory conducted an experiment that appeared to demonstrate that light speed is not the maximum speed attainable in the universe, contrary to Einstein's claim in the Special Theory of Relativity. Energized neutrinos were accelerated to the point where they appeared to cover the distance between Geneva and Gran Sasso 60 nanoseconds faster than light speed (Powell 2011).29 Sixty nanoseconds is a very, very small quantity (60 billionths of a second). The clock designed to measure such small quantities must indeed be a very precise instrument.30 Accounting numbers, on the other hand, are not so precise because they are operational numbers, not quantities. The rigor attributed to rigorous accounting research comes from the claim that it most closely resembles what the physicists at CERN are doing. But it is analogous to physics mainly in appearance. It is ironic that such research has no policy implications: “Such analysis cannot, however, in and of itself imply or dictate a preference for one reporting practice over another” (Beaver and Demski 1974, 176). The lack of a consistent referent contributes to the fact that although rigorous accounting research is based on Sen's (1988) engineering approach, the success of this model falls far short of the success of actual engineering models. The failure to yield significant replicable findings raises doubts about its exclusive status as the sine qua non of accounting scholarship, as well as the certainty of claims based on such research.31 Rescher (2009, 102) notes that one obstacle to rational prediction is “Fuzziness—data indetermination.” Accounting numbers' operational character makes them inherently fuzzy. Rescher (2009) notes a consequence of our believing that rigorous accounting research is actually rigorous. 
He notes that in certain cases, extreme precision is essential, and particularly in those cases we characterize as chaotic (Rescher 2009, 59), and the economic world with which accounting deals is just such a chaotic case (Keen 2001; Taleb 2004, 2007; Orrell 2007; Pilkey and Pilkey-Jarvis 2007; Williams and Ravenscroft 2014). According to Rescher (2009, 60):

The precision needed to go from speculation to calculation is simply beyond our reach in such cases. As far as we are concerned, the matter will be a thing of pure chance. We cannot effectively come to know the details of it. In drastic matters, quasi-quantities will impel us into ignorance. (emphasis added)

Given the data and logic problems of current archival research, some greater attention to precision (see Ball 2013, footnote 20) seems in order. In addition, the likely intractability of the data problems suggests this research methodology is not the only path to accounting understanding. It seems we have yet to reach a state of scholarly maturity in accounting that we can afford the categorical dismissal of other methodologies simply on the grounds that they do not appear rigorous enough because they eschew the currently preferred methodology.

In any field of scholarly pursuit, to claim for oneself rigor is to claim for oneself scholarly virtue. However, to claim for oneself exclusive ownership of rigor requires, it seems, much greater certainty about what you are doing than currently exists. Making such a claim denies to others the attribution of scholarly virtue. This smacks of disrespect for other citizens of the academic community. Rigor is actually a process of becoming, not a destination, once arrived at, to be possessed, defended, and occupied forever and ever. Anthony Hopwood (2007, 1,365) began his AAA Presidential Address by observing, “I think there is a growing sense of unease about the state and direction of accounting research.” Hopwood (2007, 1,372) attributed this trouble in the academy to “strong intellectual biases and prejudices.” The consequence of the hegemony of these biases and prejudices “is that accounting is currently left with a research community whose members are, in my view, too conservative, too intellectually constrained, too conformist, and insufficiently excited by and involved with the changing practice or regulation of the craft” (Hopwood 2007). Hopwood (2007) correctly observed that the problem is not one that resides within the members of the research community, but with the community itself, i.e., the institutions that make up that community. The luxury of patience afforded by Kuhnian paradigms in the natural sciences—old paradigms die when their proponents die—is not available to accounting. This is so because the paradigm of rigorous accounting research has been institutionalized and institutions do not die of natural causes. Over time, North American doctoral programs have become more homogeneous, turning out new scholars trained only in the methodology of rigorous accounting research. 
In spite of the procedural changes at the AAA that give greater voice to its members, the key posts of association journal editors, directors of publications, directors of research, Doctoral Consortium faculty and New Faculty Consortium faculty continue to be selected mainly from the ranks of successful rigorous accounting researchers. Institutions are indeed much more difficult to change. As Hopwood (2007, 1,374) noted, “The difficulties that we face are ones that are deeply embedded in complex institutional structures. Change will not be easy, but it will be more likely to occur if we maintain a dialogue and debate.”

One obvious suggestion for improvement is to acknowledge that accounting is a practice and is, therefore, inherently multi-disciplinary. That suggests a kind of Habermasian (1987) community in which no voice is suppressed and all claims are exposed to critique and defended by “good” reasons, not power. Accounting has a history, and history is a discipline with its own brand of rigorous scholarship. Accounting is about accountability (Soll 2014), so it is about ethics and sociology, which also have brands of rigorous scholarship. Accounting is about law and politics, and law has its own brand of scholarly rigor. So it seems logical that reasonable folks would, therefore, acknowledge the relevance of multiple kinds of scholarship, each of which can give insights into accounting. However, as an observer of the American academy for nearly 40 years, it is difficult for me to be sanguine about such a seemingly reasonable approach. Accounting as a discipline with intellectual ambitions has gotten increasingly narrow and restrained over the past 40 years, in spite of the nearly ceaseless advocacy for multiple perspectives and some minimal level of mutual respect.

This essay is intended as one type of response to Hopwood's (2007) call for dialogue and debate, but it is not dialogue or debate with rigorous accounting research, since having a dialogue or debate with an institution is fruitless. It is, however, an attempt at dialogue and debate with those people who may read it and be persuaded by the argument that rigorous accounting research, by its own alleged standards, is not as rigorous as claimed and, therefore, realize they are free to choose some other way without fear they have abandoned all hope of doing rigorous scholarship. If just one person has the revelation that the claims to rigor ring a bit hollow and decides to pursue some other avenue within the discipline, then this dialogue and debate has been productive. Although perhaps too slowly, institutions can be changed via people withdrawing their support for them and that happens, usually, one person at a time.

Jensen Comment
There are two sometimes competing criteria in accounting research, as in science generally. One is rigor, which Paul treats quite well in the paper above. The other is relevance, which the paper touches on only briefly. Accountics science published in the leading academic accounting research journals is relevant to doctoral students and faculty seeking publication in those same journals. The mathematical and statistical models are pored over for ideas, both for extensions of the published research and for applications of the models to other data sets and hypotheses.

The problem is that relevance in this context creates a "cargo cult" of the sort Richard Feynman first described in physics and that has since been noted in various disciplines, notably the social sciences.

See Below


Metaphorical Meaning of the Phrase "Cargo Cult Science" ---
http://en.wikipedia.org/wiki/Cargo_cult#Metaphorical_uses_of_the_term

The term "cargo cult" has been used metaphorically to describe an attempt to recreate successful outcomes by replicating circumstances associated with those outcomes, although those circumstances are either unrelated to the causes of outcomes or insufficient to produce them by themselves. In the former case, this is an instance of the post hoc ergo propter hoc fallacy.

The metaphorical use of "cargo cult" was popularized by physicist Richard Feynman at a 1974 Caltech commencement speech, which later became a chapter in his book Surely You're Joking, Mr. Feynman!, where he coined the phrase "cargo cult science" to describe activity that had some of the trappings of real science (such as publication in scientific journals) but lacked a basis in honest experimentation.

Later the term cargo cult programming developed to describe computer software containing elements that are included because of successful utilization elsewhere but that are unnecessary for the task at hand.

Questions
Why does the business world ignore business school academic research, including accountics research?
What other academic researchers have become "irrelevant"?

Answer

The authors blamed business schools’ scientifically rigorous research into arcane areas – studies whose theories didn’t have to be proved to work in the real world, only to the academic journals in which they hoped to get published (and, they maintained, on which tenure depended).

The same irrelevancy of academic researchers is taking place in sociology and anthropology.

"Making Business School Research More Relevant," by James C. Wetherbe Jon Eckhardt, Harvard Business School Blog, December 24, 2014 ---
Click Here
https://hbr.org/2014/12/making-business-school-research-more-relevant?utm_source=newsletter_daily_alert&utm_medium=email&utm_campaign=alert_date&cm_ite=DailyAlert-122514+%281%29&cm_lm=rjensen%40trinity.edu&referral=00563&cm_ven=Spop-Email&cm_mmc=email-_-newsletter-_-daily_alert-_-alert_date

 

In a landmark 2005 Harvard Business Review article, USC business professors Warren Bennis and James O’Toole argued that the skills imparted by most business schools were not relevant to students and their eventual employers. The authors blamed business schools’ scientifically rigorous research into arcane areas – studies whose theories didn’t have to be proved to work in the real world, only to the academic journals in which they hoped to get published (and, they maintained, on which tenure depended). Do management professors “believe that the regard of their peers is more important than studying what really matters to executives who can put their ideas into practice?” Bennis and O’Toole wrote. “Apparently so.”

Nearly 10 years after the article was published, we believe this problem is even more acute, and that as such business schools need to get serious about making research more relevant to business. The best way to do that is to emulate the world of medical research: conduct research and then put it into practice with real companies.

The rise of rigorous research in business schools has fostered an unfortunate paradox: business schools have become increasingly disconnected from practice. The reason is that business school faculty are almost exclusively rewarded on two metrics. First, they are rewarded for the number of scientific papers that they write that are published in prestigious journals that are exclusively controlled by, and read by, other academics. Second, they are rewarded by their citation count—the number of times their articles are cited by articles from other professors.

These incentives play a powerful role in how business schools are ranked. In fact, professors are often terminated during tenure evaluation if they do not perform well on these two dimensions. These incentives mean business professors spend most of their time searching for research topics they think will interest other business professors, conducting that research, and attending academic conferences to promote their work to other professors and increase citation counts. Professors who attend industry conferences or immerse themselves in the practice of business decrease the chances of performing well on publication and cite counts.

The result of this scholarly activity is a closed system. Business faculty create a body of knowledge that is scientifically novel, interesting, and important. But far too often, the research doesn’t address the real problems of entrepreneurs, managers, investors, marketers, and business leaders.

While many business professors view putting research into practice as incompatible with modern research universities, they only need to look across their campuses to academic medical centers to see that this view is outdated. Medical schools understand that patients are not well served by research driven solely by biologists, chemists, psychologists, and other research faculty who never treat patients.

Academic medical centers integrate research with practice through what the medical community refers to as “translational research.” Translational research takes scientific research conducted in the lab and makes it useful to people. Fully integrated translational research faculty are tenured professors who practice medicine and use the latest scientific techniques to answer questions about those techniques from practicing physicians. In addition, they often coauthor research papers with basic scientists and collaborate on clinical initiatives with clinical faculty.

The work of translational medical scientists means the knowledge production engines of medical schools advance basic science, applied science, and the practice of medicine. Why should business research and business professors be any different?

Five changes would initiate a new era of highly relevant business school research:

To be sure, corporate funding of medical research for some time has been accused of biasing findings in favor of for-profit interests. Corporate-funded business school research has the potential for conflicts of interest as well. But the way to resolve them is through full disclosure of funding sources and high research standards. The academic journal referees of any business study should look closely at whether its findings and research methodology could have been biased. For their part, researchers must specifically explain how their methodology eliminated such bias.

Getting business professors to change their research agenda requires deans who embrace fundamental institutional change. While such change is never easy, the good news is that business schools have a strong scientific capability to build upon. They only need to apply that capability to issues that are much more relevant to the organizations that will employ their graduates.

Bob Jensen's threads on the how accountics scientists need to change ---
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

There are other disciplines where academic researchers have lost their relevance.

Anthropology Without Science: A new long-range plan for the American Anthropological Association that omits the word "science" from the organization's vision for its future has exposed fissures in the discipline ---
http://www.trinity.edu/rjensen/HigherEdControversies.htm#AntropologyNonScience

"How Sociologists Made Themselves Irrelevant," by Orlando Patterson, Chronicle of Higher Education's Chronicle Review, December 1, 2014 ---
http://chronicle.com/article/How-Sociologists-Made/150249/?cid=at&utm_source=at&utm_medium=en

Early in 2014, President Obama announced a new initiative, My Brother’s Keeper, aimed at alleviating the problems of black youth. Not only did a task force appointed to draw up the policy agenda not include a single professional sociologist, but I could find no evidence that any sociologist was even consulted in the critical first three months of the group’s work, summarized in a report to the president, despite the enormous amount of work sociologists have done on poverty and the problems of black youth.

Sadly, this situation is typical because sociologists have become distant spectators rather than shapers of policy. In the effort to keep ourselves academically pure, we’ve also become largely irrelevant in molding the most important social enterprises of our era.

Continued in article

How Accountics Scientists Made Themselves Irrelevant:  In the effort to keep ourselves academically pure ---
The Cargo Cult of Accounting Research
How Accountics Scientists Should Change: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

 

Jensen Comment
Most accountics Cargo Cult scientists are silent and smug with respect to the Pathways Commission Report, especially its advocacy of clinical research and of research methods extending beyond the GLM data mining of commercial databases, an approach that the AAA leadership itself admits has grown stale and lacks innovation ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Essays

This is a perfect opportunity for me to recall the cargo plane scene from a movie called Mondo Cane ---
http://en.wikipedia.org/wiki/Mondo_cane

Sudipta Basu picked up on the Cargo Cult analogy in describing the stagnation of accountics science research over the past few decades.

"How Can Accounting Researchers Become More Innovative? by Sudipta Basu, Accounting Horizons, December 2012, Vol. 26, No. 4, pp. 851-87 ---
http://aaajournals.org/doi/full/10.2308/acch-10311 


 

"We fervently hope that the research pendulum will soon swing back from the narrow lines of inquiry that dominate today's leading journals to a rediscovery of the richness of what accounting research can be. For that to occur, deans and the current generation of academic accountants must give it a push."
Michael H. Granof and Stephen A. Zeff (2008)


 

"Rather than clinging to the projects of the past, it is time to explore questions and engage with ideas that transgress the current accounting research boundaries. Allow your values to guide the formation of your research agenda. The passion will inevitably follow."
Joni J. Young (2009)

. . .

Is Academic Accounting a “Cargo Cult Science”?

In a commencement address at Caltech titled “Cargo Cult Science,” Richard Feynman (1974) discussed “science, pseudoscience, and learning how not to fool yourself.” He argued that despite great efforts at scientific research, little progress was apparent in school education. Reading and mathematics scores kept declining, despite schools adopting the recommendations of experts. Feynman (1974, 11) dubbed fields like these “Cargo Cult Sciences,” explaining the term as follows:

In the South Seas there is a Cargo Cult of people. During the war they saw airplanes land with lots of good materials, and they want the same things to happen now. So they've arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas—he's the controller—and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land. So I call these things Cargo Cult Science, because they follow all the apparent precepts and forms of scientific investigation, but they're missing something essential, because the planes don't land.

Feynman (1974) argued that the key distinction between a science and a Cargo Cult Science is scientific integrity: “[T]he idea is to give all of the information to help others judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.” In other words, papers should not be written to provide evidence for one's hypothesis, but rather to “report everything that you think might make it invalid.” Furthermore, “you should not fool the layman when you're talking as a scientist.”

Even though more and more detailed rules are constantly being written by the SEC, FASB, IASB, PCAOB, AICPA, and other accounting experts (e.g., Benston et al. 2006), the number and severity of accounting scandals are not declining, which is Feynman's (1969) hallmark of a pseudoscience. Because accounting standards often reflect standard-setters' ideology more than research into the effectiveness of different alternatives, it is hardly surprising that accounting quality has not improved. Even preliminary research findings can be transformed journalistically into irrefutable scientific results by the political process of accounting standard-setting. For example, the working paper results of Frankel et al. (2002) were used to justify the SEC's longstanding desire to ban non-audit services in the Sarbanes-Oxley Act of 2002, even though the majority of contemporary and subsequent studies found different results (Romano 2005). Unfortunately, the ability to bestow status by invitation to select conferences and citation in official documents (e.g., White 2005) may let standard-setters set our research and teaching agendas (Zeff 1989).

Academic Accounting and the “Cult of Statistical Significance”

Ziliak and McCloskey (2008) argue that, in trying to mimic physicists, many biologists and social scientists have become devotees of statistical significance, even though most articles in physics journals do not report statistical significance. They argue that statistical tests are typically used to infer whether a particular effect exists, rather than to measure the magnitude of the effect, which usually has more practical import. While early empirical accounting researchers such as Ball and Brown (1968) and Beaver (1968) went to great lengths to estimate how much extra information reached the stock market in the earnings announcement month or week, subsequent researchers limited themselves to answering whether other factors moderated these effects. Because accounting theories rarely provide quantitative predictions (e.g., Kinney 1986), accounting researchers perform nil hypothesis significance testing rituals, i.e., test unrealistic and atheoretical null hypotheses that a particular coefficient is exactly zero.15 While physicists devise experiments to measure the mass of an electron to the accuracy of tens of decimal places, accounting researchers are still testing the equivalent of whether electrons have mass. Indeed, McCloskey (2002) argues that the “secret sins of economics” are that economics researchers use quantitative methods to produce qualitative research outcomes such as (non-)existence theorems and statistically significant signs, rather than to predict and measure quantitative (how much) outcomes.

Practitioners are more interested in magnitudes than existence proofs, because the former are more relevant in decision making. Paradoxically, accounting research became less useful in the real world by trying to become more scientific (Granof and Zeff 2008). Although every empirical article in accounting journals touts the statistical significance of the results, practical significance is rarely considered or discussed (e.g., Lev 1989). Empirical articles do not often discuss the meaning of a regression coefficient with respect to real-world decision variables and their outcomes. Thus, accounting research results rarely have practical implications, and this tendency is likely worst in fields with the strongest reliance on statistical significance such as financial reporting research.
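Basu's distinction between statistical and practical significance can be made concrete with a back-of-the-envelope calculation. The numbers below are purely illustrative assumptions, not figures from the article: with a large enough archival sample, an economically trivial effect easily clears the conventional significance threshold.

```python
import math

# Illustrative sketch (assumed numbers): a "nil hypothesis" test of
# whether a mean effect is exactly zero, as criticized in the excerpt.

def z_stat(mean_effect, sd, n):
    """z statistic for testing the null hypothesis that the mean is zero."""
    return mean_effect / (sd / math.sqrt(n))

tiny_effect = 0.001   # assumed: an economically trivial mean effect
sd = 0.05             # assumed cross-sectional standard deviation
n = 100_000           # a large archival sample, typical of accountics work

z = z_stat(tiny_effect, sd, n)
print(round(z, 2))    # 6.32 -- far beyond the 1.96 cutoff at the 5% level
```

The effect is "highly significant" statistically, yet its magnitude is negligible for any real decision, which is exactly the gap between existence tests and measurement that the excerpt describes.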

Ziliak and McCloskey (2008) highlight a deeper concern about over-reliance on statistical significance—that it does not even provide evidence about whether a hypothesis is true or false. Carver (1978) provides a memorable example of drawing the wrong inference from statistical significance:

What is the probability of obtaining a dead person (label this part D) given that the person was hanged (label this part H); this is, in symbol form, what is P(D|H)? Obviously, it will be very high, perhaps 0.97 or higher. Now, let us reverse the question. What is the probability that a person has been hanged (H), given that the person is dead (D); that is, what is P(H|D)? This time the probability will undoubtedly be very low, perhaps 0.01 or lower. No one would be likely to make the mistake of substituting the first estimate (0.97) for the second (0.01); that is, to accept 0.97 as the probability that a person has been hanged given that the person is dead. Even though this seems to be an unlikely mistake, it is exactly the kind of mistake that is made with interpretations of statistical significance testing—by analogy, calculated estimates of P(D|H) are interpreted as if they were estimates of P(H|D), when they clearly are not the same.

As Cohen (1994) succinctly explains, statistical tests assess the probability of observing a sample moment as extreme as observed conditional on the null hypothesis being true, or P(D|H0), where D represents data and H0 represents the null hypothesis. However, researchers want to know whether the null hypothesis is true, conditional on the sample, or P(H0|D). We can calculate P(H0|D) from P(D|H0) by applying Bayes' theorem, but that requires knowledge of P(H0), which is what researchers want to discover in the first place. Although Ziliak and McCloskey (2008) quote many eminent statisticians who have repeatedly pointed out this basic logic, the essential point has not entered the published accounting literature.
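Carver's hanged-person arithmetic, together with Cohen's Bayes'-theorem point, can be sketched in a few lines. Carver supplies only the two conditional probabilities (0.97 and about 0.01); the prior and marginal probabilities below are assumed for illustration so that the arithmetic reproduces his numbers.

```python
# Sketch of Carver's (1978) example, showing why P(D|H) and P(H|D) differ.
# The prior p_hanged and marginal p_dead are illustrative assumptions.

def posterior(p_d_given_h, p_h, p_d):
    """Bayes' theorem: P(H|D) = P(D|H) * P(H) / P(D)."""
    return p_d_given_h * p_h / p_d

p_d_given_h = 0.97   # P(dead | hanged), as quoted in the excerpt
p_hanged = 0.0001    # assumed prior: hanging is rare
p_dead = 0.0097      # assumed marginal probability of being dead

p_h_given_d = posterior(p_d_given_h, p_hanged, p_dead)
print(round(p_h_given_d, 2))  # 0.01 -- hanging explains very few deaths
```

The same reversal is what happens when a researcher reads P(D|H0), the p-value, as if it were P(H0|D), the probability that the null hypothesis is true.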

In my view, restoring relevance to mathematically guided accounting research requires changing our role model from applied science to engineering (Colander 2011).16 While science aims at finding truth through application of institutionalized best practices with little regard for time or cost, engineering seeks to solve a specific problem using available resources, and the engineering method is “the strategy for causing the best change in a poorly understood or uncertain situation within the available resources” (Koen 2003). We should move to an experimental approach that simulates real-world applications or field tests new accounting methods in particular countries or industries, as would likely happen by default if accounting were not monopolized by the IASB (Dye and Sunder 2001). The inductive approach to standard-setting advocated by Littleton (1953) is likely to provide workable solutions to existing problems and be more useful than an axiomatic approach that starts from overly simplistic first principles.

To reduce the gap between academe and practice and stimulate new inquiry, AAA should partner with the FEI or Business Roundtable to create summer, semester, or annual research internships for accounting professors and Ph.D. students at corporations and audit firms.17 Accounting professors who have served as visiting scholars at the SEC and FASB have reported positively about their experience (e.g., Jorgensen et al. 2007), and I believe that such practice internships would provide opportunities for valuable fieldwork that supplements our experimental and archival analyses. Practice internships could be an especially fruitful way for accounting researchers to spend their sabbaticals.

Another useful initiative would be to revive the tradition of The Accounting Review publishing papers that do not rely on statistical significance or mathematical notation, such as case studies, field studies, and historical studies, similar to the Journal of Financial Economics (Jensen et al. 1989).18 A separate editor, similar to the book reviews editor, could ensure that appropriate criteria are used to evaluate qualitative research submissions (Chapman 2012). A co-editor from practice could help ensure that the topics covered are current and relevant, and help reverse the steep decline in AAA professional membership. Encouraging diversity in research methods and topics is more likely to attract new scholars who are passionate and intrinsically care about their research, rather than attracting only those who imitate current research fads for purely instrumental career reasons.

The relevance of accounting journals can be enhanced by inviting accomplished guest authors from outside accounting. The excellent April 1983 issue of The Accounting Review contains a section entitled “Research Perspectives from Related Disciplines,” which includes essays by Robert Wilson (Decision Sciences), Michael Jensen and Stephen Ross (Finance and Economics), and Karl Weick (Organizational Behavior) that were based on invited presentations at the 1982 AAA Annual Meeting. The thought-provoking essays were discussed by prominent accounting academics (Robert Kaplan, Joel Demski, Robert Libby, and Nils Hakansson); I still use Jensen (1983) to start each of my Ph.D. courses. Academic outsiders bring new perspectives to familiar problems and can often reframe them in ways that enable solutions (Tullock 1966).

I still lament that no accounting journal editor invited the plenary speakers—Joe Henrich, Denise Schmandt-Besserat, Michael Hechter, Eric Posner, Robert Lucas, and Vernon Smith—at the 2007 AAA Annual Meeting to write up their presentations for publication in accounting journals. It is rare that Nobel Laureates and U.S. Presidential Early Career Award winners address AAA annual meetings.20 I strongly urge that AAA annual meetings institute a named lecture given by a distinguished researcher from a different discipline, with the address published in The Accounting Review. This would enable cross-fertilization of ideas between accounting and other disciplines. Several highly cited papers published in the Journal of Accounting and Economics were written by economists (Watts 1998), so this initiative could increase citation flows from accounting journals to other disciplines.

HOW CAN WE MAKE U.S. ACCOUNTING JOURNALS MORE READABLE AND INTERESTING?

Even the greatest discovery will have little impact if other people cannot understand it or are unwilling to make the effort. Zeff (1978) says, “Scholarly writing need not be abstruse. It can and should be vital and relevant. Research can succeed in illuminating the dark areas of knowledge and facilitating the resolution of vexing problems—but only if the report of research findings is communicated to those who can carry the findings further and, in the end, initiate change.” If our journals put off readers, then our research will not stimulate our students or induce change in practice (Dyckman 1989).

Michael Jensen (1983, 333–334) addressed the 1982 AAA Annual Meeting saying:

Unfortunately, there exists in the profession an unwarranted bias toward the use of mathematics even in situations where it is unproductive or useless. One manifestation of this is the common use of the terms “rigorous” or “analytical” or even “theoretical” as identical with “mathematical.” None of these links is, of course, correct. Mathematical is not the same as rigorous, nor is it the same as analytical or theoretical. Propositions can be logically rigorous without being mathematical, and analysis does not have to take the form of symbols and equations. The English sentence and paragraph will do quite well for many analytical purposes. In addition, the use of mathematics does not prevent the commission of errors—even egregious ones.

Unfortunately, the top accounting journals demonstrate an increased “tyranny of formalism” that “develops when mathematically inclined scholars take the attitude that if the analytical language is not mathematics, it is not rigorous, and if a problem cannot be solved with the use of mathematics, the effort should be abandoned” (Jensen 1983, 335). Sorter (1979) acidly described the transition from normative to quantitative research: “the golden age of empty blindness gave way in the sixties to bloated blindness calculated to cause indigestion. In the sixties, the wonders of methodology burst upon the minds of accounting researchers. We entered what Maslow described as a mean-oriented age. Accountants felt it was their absolute duty to regress, regress and regress.” Accounting research increasingly relies on mathematical and statistical models with highly stylized and unrealistic assumptions. As Young (2006) demonstrates, the financial statement “user” in accounting research and regulation bears little resemblance to flesh-and-blood individuals, and hence our research outputs often have little relevance to the real world.

Figure 1 compares how frequently accountants and members of ten other professions are cited in The New York Times in the late 1990s (Ellenberg 2000). These data are juxtaposed with the numbers employed in each profession during 1996 using U.S. census data. Accountants are cited less frequently relative to their numbers than any profession except computer programmers. One possibility is that journalists cannot detect anything interesting in accounting journals. Another possibility is that university public relations staffs are consistently unable to find an interesting angle in published accounting papers that they can pitch to reporters. I have little doubt that the obscurantist tendencies in accounting papers make it harder for most outsiders to understand what accounting researchers are saying or find interesting.

Accounting articles have also become much longer over time, and I am regularly asked to review articles with introductions that are six to eight pages long, with many of the paragraphs cut-and-pasted from later sections. In contrast, it took Watson and Crick (1953) just one journal page to report the double-helix structure of DNA. Einstein (1905) took only three journal pages to derive his iconic equation E = mc2. Since even the best accounting papers are far less important than these classics of 20th century science, readers waste time wading through academic bloat (Sorter 1979). Because the top general science journals like Science and Nature place strict word limits on articles that differ by the expected incremental contribution, longer scientific papers signal better quality.21 Unfortunately, accounting journals do not restrict length, which encourages bloated papers. Another driver of length is the aforementioned trend toward greater rigor in the review process (Ellison 2002).

My first suggestion for making published accounting articles less tedious and boring is to impose strict word limits and to revive the “Notes” sections for shorter contributions. Word limits force authors to think much harder about how to communicate their essential ideas succinctly and greatly improve writing. Similarly, I would encourage accounting journals to follow Nature and provide guidelines for informative abstracts.22 A related suggestion is to follow the science journals, and more recently, The American Economic Review, by introducing online-only appendices to report the lengthy robustness sections that are demanded by persnickety reviewers.23 In addition, I strongly encourage AAA journals to require authors to post online with each journal article the data sets and working computer code used to produce all tables as a condition for publication, so that other independent researchers can validate and replicate their studies (Bernanke 2004; McCullough and McKitrick 2009).24 This is important because recent surveys of science and management researchers reveal that data fabrication, data falsification, and other violations in published studies are far from rare (Martinson et al. 2005; Bedeian et al. 2010).

I also urge that authors report results graphically rather than in tables, as recommended by numerous statistical experts (e.g., Tukey 1977; Chambers et al. 1983; Wainer 2009). For example, Figure 2 shows how the data in Figure 1 can be displayed more effectively without taking up more page space (Gelman et al. 2002). Scientific papers routinely display results in figures with confidence intervals rather than tables with standard errors and p-values, and accounting journals should adopt these practices to improve understandability. Soyer and Hogarth (2012) show experimentally that even well-trained econometricians forecast more slowly and inaccurately when given tables of statistical results than when given equivalent scatter plots. Most accounting researchers cannot recognize the main tables of Ball and Brown (1968) or Beaver (1968) on sight, but their iconic figures are etched in our memories. The figures in Burgstahler and Dichev (1997) convey their results far more effectively than tables would. Indeed, the finance professoriate was convinced that financial markets are efficient by the graphs in Fama et al. (1969), a highly influential paper that does not contain a single statistical test! Easton (1999) argues that the 1990s non-linear earnings-return relation literature would likely have been developed much earlier if accounting researchers routinely plotted their data. Since it is not always straightforward to convert tables into graphs (Gelman et al. 2002), I recommend that AAA pay for new editors of AAA journals to take courses in graphical presentation.

I would also recommend that AAA award an annual prize for the best figure or graphic in an accounting journal each year. In addition to making research articles easier to follow, figures ease the introduction of new ideas into accounting textbooks. Economics is routinely taught with diagrams and figures to aid intuition—demand and supply curves, IS-LM analysis, Edgeworth boxes, etc. (Blaug and Lloyd 2010). Accounting teachers would benefit if accounting researchers produced similar education tools. Good figures could also be used to adorn the cover pages of our journals similar to the best science journals; in many disciplines, authors of lead articles are invited to provide an illustration for the cover page. JAMA (Journal of the American Medical Association) reproduces paintings depicting doctors on its cover (Southgate 1996); AAA could print paintings of accountants and accounting on the cover of The Accounting Review, perhaps starting with those collected in Yamey (1989). If color printing costs are prohibitive, we could imitate the Journal of Political Economy back cover and print passages from literature where accounting and accountants play an important role, or even start a new format by reproducing cartoons illustrating accounting issues. The key point is to induce accountants to pick up each issue of the journal, irrespective of the research content.

I think that we need an accounting journal to “fill a gap between the general-interest press and most other academic journals,” similar to the Journal of Economic Perspectives (JEP).25 Unlike other economics journals, JEP editors and associate editors solicit articles from experts with the goal of conveying state-of-the-art economic thinking to non-specialists, including students, the lay public, and economists from other specialties.26 The journal explicitly eschews mathematical notation or regression results and requires that results be presented either graphically or as a table of means. In response to the question “List the three economics journals (broadly defined) that you read most avidly when a new issue appears,” a recent survey of U.S. economics professors found that the Journal of Economic Perspectives was their second favorite economics journal (Davis et al. 2011), which suggests that an unclaimed niche exists in accounting. Although Accounting Horizons could be restructured along these lines to better reach practitioners, it might make sense to start a new association-wide journal under the AAA aegis.

 

CONCLUSION

I believe that accounting is one of the most important human innovations. The invention of accounting records was likely indispensable to the emergence of agriculture, and ultimately, civilization (e.g., Basu and Waymire 2006). Many eminent historians view double-entry bookkeeping as indispensable for the Renaissance and the emergence of capitalism (e.g., Sombart 1919; Mises 1949; Weber 1927), possibly via stimulating the development of algebra (Heeffer 2011). Sadly, accounting textbooks and the top U.S. accounting journals seem uninterested in whether and how accounting innovations changed history, or indeed in understanding the history of our current practices (Zeff 1989).

In short, the accounting academy embodies a “tragedy of the commons” (Hardin 1968) where strong extrinsic incentives to publish in “top” journals have led to misdirected research efforts. As Zeff (1983) explains, “When modeling problems, researchers seem to be more affected by technical developments in the literature than by their potential to explain phenomena. So often it seems that manuscripts are the result of methods in search of questions rather than questions in search of methods.” Solving common problems requires strong collective action by the social network of accounting researchers using self-governing mechanisms (e.g., Ostrom 1990, 2005). Such initiatives should occur at multiple levels (e.g., school, association, section, region, and individual) to have any chance of success.

While accounting research has made advances in recent decades, our collective progress seems slow, relative to the hard work put in by so many talented researchers. Instead of letting financial economics and psychology researchers and accounting standard-setters choose our research methods and questions, we should return our focus to addressing fundamental issues in accounting. As important, junior researchers should be encouraged to take risks and question conventional academic wisdom, rather than blindly conform to the party line. For example, the current FASB–IASB conceptual framework “remains irreparably flawed” (Demski 2007), and accounting researchers should take the lead in developing alternative conceptual frameworks that better fit what accounting does (e.g., Ijiri 1983; Ball 1989; Dickhaut et al. 2010). This will entail deep historical and cross-cultural analyses rather than regression analyses on machine-readable data. Deliberately attacking the “fundamental and frequently asked questions” in accounting will require innovations in research outlooks and methods, as well as training in the history of accounting thought. It is shameful that we still cannot answer basic questions like “Why did anyone invent recordkeeping?” or “Why is double-entry bookkeeping beautiful?”


Bravo to Professor Basu for having the guts to address the Cargo Cult in this manner!
 

"David Ginsberg, chief data scientist at SAP, said communication skills are critically important in the field, and that a key player on his big-data team is a “guy who can translate Ph.D. to English. Those are the hardest people to find.”
James Willhite (see below)

Might we also say the same thing about accountics scientists slaving over their enormous purchased "big data" databases?

"Getting Started in 'Big Data'," by James Willhite, The Wall Street Journal, February 4, 2014 ---
http://blogs.wsj.com/cfo/2014/02/04/getting-started-in-big-data/?mod=djemCFO_h

Wanted: Ph.D.-level statistician with the technical skill to use data-visualization software and a deep understanding of the _____ industry.

Fill in the blank with almost any business: consumer products, entertainment, health care, semiconductors or fast food. The list reflects the growing range of companies trying to mine mountains of data in hopes of improving product design, supply chains, customer service or other operations.

. . .

At the most basic level, big data is the art and science of collecting and combing through vast amounts of information for insights that aren’t apparent on a smaller scale. Financial executives who want to harness big data face a critical hurdle: Finding people who can glean it, understand it, and translate it into plain English.

The field is so new that the U.S. Bureau of Labor Statistics doesn’t yet have a classification for data scientists, according to BLS economist Sara Royster. That makes it tough to estimate the unemployment rate or salaries for job seekers in the field.

But executives and recruiters, who compete for talent in the nascent specialty, point to hiring strategies that can get a big-data operation off the ground. They say they look for specific industry experience, poach from data-rich rivals, rely on interview questions that screen out weaker candidates and recommend starting with small projects.

David Ginsberg, chief data scientist at business-software maker SAP AG , said communication skills are critically important in the field, and that a key player on his big-data team is a “guy who can translate Ph.D. to English. Those are the hardest people to find.”

Along with the ability to explain their findings, data scientists need to have a proven record of being able to pluck useful information from data that often lack an obvious structure and may even come from a dubious source. This expertise doesn’t always cut across industry lines. A scientist with a keen knowledge of the entertainment industry, for example, won’t necessarily be able to transfer his skills to the fast-food market.

Some candidates can make the leap. Wolters Kluwer NV, a Netherlands-based information-services provider, has had some success in filling big-data jobs by recruiting from other, data-rich industries, such as financial services. “We have found tremendous success with going to alternative sources and looking at different businesses and saying, ‘What can you bring into our business?’ ” said Kevin Entricken, the company’s chief financial officer.

The trick, some experts say, is finding a candidate steeped in higher mathematics with hands-on familiarity with a particular business. “When you have all those Ph.D.s in a room, magic doesn’t necessarily happen because they may not have the business capability,” said Andy Rusnak, a senior executive for the Americas in Ernst & Young’s advisory practice.

Companies can hamstring themselves in big-data projects by thinking too long term, Mr. Rusnak said. They should focus instead on what they can discover in an eight- to 10-week period, he said, and think less about business transformation.

Dunkin’ Brands Group Inc. aims to wring all the value it can out of its data, by using it to entice customers to visit its stores more often and try new doughnuts and drinks. Last week, it went national with a loyalty program that will allow it to harvest data on customer habits.

The program allows the company to target individuals who opt into the program with specific offers aimed at making them more frequent customers. “If you’ve only been coming in the morning, perhaps we’d give you an offer for the afternoon,” said Dunkin’ Chief Information Officer Jack Clare.

Netflix’s Mr. Amatriain said, “I like to face candidates with real practical problems.” He said he will say to an applicant, “You have this data that comes from our users. How can you use it to solve this particular problem? How would you turn it into an algorithm that would recommend movies?” He said that the question is deliberately open-ended, forcing candidates to prove that they can understand not only the math, but what he calls “the big picture approach to using big data to gain insights.”

Jensen Comment
If accountics scientists are to accomplish the above they will have to abandon their comfortable Cargo Cult isolation from the real world ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

 

 


"Academic Research With Mass Appeal," by Erin Zlome, Bloomberg Business Week, January 28, 2013 ---
http://www.businessweek.com/articles/2013-01-28/academic-research-with-mass-appeal

Business professors are great at writing jargon-filled, hard-to-digest research papers. But every once in a while, they knock it out of the park with the general public. A small pool of research achieved such blockbuster status in 2012 by becoming the most read, most downloaded, or most written-about pieces authored by professors at top business schools. Tax evasion, finding a job, and the benefits of teaching employees Spanish are some of the topics that got non-students reading.

At Harvard Business School, an excerpt from Clayton Christensen’s book How Will You Measure Your Life? was the year’s most read preview of forthcoming research. The passage uses the downfall of Blockbuster and the rise of Netflix (NFLX) as an analogy for how we may end up paying a high cost for small decisions.

Continued in article

MIT, like Harvard, places enormous value on having both feet planted in the real world

The professions of architecture, engineering, law, and medicine depend heavily upon university researchers who focus on the problems of practitioners working in the real world.

If accountics scientists want to change their ways and focus more on problems of the accounting practitioners working in the real world, one small step that can be taken is to study the presentations scheduled for a forthcoming MIT Sloan School Conference.

Financial Education Daily, May 2012 ---
http://paper.li/businessschools?utm_source=subscription&utm_medium=email&utm_campaign=paper_sub

Learning best practice from the best practitioners

MIT Sloan invites more than 400 of the world’s finest leaders to campus every year. The most anticipated of these visits are the talks given as part of the Dean’s Innovative Leader Series, which features the most dynamic movers and shakers of our day.

At a school that places enormous value on having both feet planted in the real world, the Dean’s Innovative Leader Series is a powerful learning tool. Students have the rare privilege of engaging in frank and meaningful discussions with the leaders who are shaping the present and future marketplace.

Bob Jensen's threads on other steps that should be taken by accountics scientists to become more focused on the needs of the profession ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

 

 




The Sad State of Economic Theory and Research

"The End of Economists' Imperialism," by Justin Fox, Harvard Business Review Blog, January 4, 2013 --- Click Here
http://blogs.hbr.org/fox/2013/01/the-end-of-economists-imper.html?referral=00563&cm_mmc=email-_-newsletter-_-daily_alert-_-alert_date&utm_source=newsletter_daily_alert&utm_medium=email&utm_campaign=alert_date 

"By almost any market test, economics is the premier social science," Stanford University economist Edward Lazear wrote just over a decade ago. "The field attracts the most students, enjoys the attention of policy-makers and journalists, and gains notice, both positive and negative, from other scientists."

Lazear went on to describe how economists, with the University of Chicago's Gary Becker leading the way, had been running roughshod over the other social sciences — using economic tools to study crime, the family, accounting, corporate management, and countless other not strictly economic topics. "Economic imperialism" was the name he gave to this phenomenon (and to his article, which was published in the February 2000 issue of the Quarterly Journal of Economics). And in his view it was a benevolent reign. "The power of economics lies in its rigor," he wrote. "Economics is scientific; it follows the scientific method of stating a formal refutable theory, testing theory, and revising the theory based on the evidence. Economics succeeds where other social scientists fail because economists are willing to abstract."

Triumphalism like that calls for a comeuppance, of course. So, as the nation's (and a lot of the world's) economists gather this weekend in San Diego for their annual hoedown, it's worth asking: Are there any signs that the imperialist era of economics might finally be coming to an end?

Lazear acknowledged one such indicator in his article — the invasion of economics by psychological teachings about cognitive bias. Two years later, in 2002, the co-leader of that invasion, Princeton psychology professor Daniel Kahneman, won an economics Nobel (the other co-leader, Amos Tversky, had died in 1996). But while behavioral economics has since solidified its status as an important part of the discipline, it hasn't come close to conquering it. On the really big questions — how to run the economy, for example — the mainstream view described by Lazear has continued to dominate. Economists have also continued their imperialist habit of delving into other fields: 2005's Freakonomics, co-authored by Becker disciple Steven Levitt, was a prime example of this — and sold millions of copies. As for Lazear, he got himself appointed chairman of President George W. Bush's Council of Economic Advisers in 2006.

And then, well, things didn't go so well. The financial crisis and subsequent economic downturn — which Lazear somewhat infamously downplayed while in office — have put a big dent in the credibility of the macro side of the discipline. The issue isn't that economists have nothing interesting to say about the crisis. It's that they have so many different things to say about it. As MIT financial economist Andrew Lo found after reading 11 accounts of the crisis by academic economists (along with nine by journalists, plus former Treasury Secretary Hank Paulson's personal account), there is massive disagreement not just on why the crisis happened but on what actually happened. "Many of us like to think of financial economics as a science," Lo wrote, "but complex events like the financial crisis suggest that this conceit may be more wishful thinking than reality."

Part of the issue is that Lazear's description of the scientific way in which economics supposedly works (state a theory, test it, revise) doesn't really apply in the case of a once-in-a-lifetime financial crisis. I tend to think it doesn't apply for macroeconomics in general. As economist Paul Samuelson is said to have said, "We have but one sample of history." Meaning that you can never get truly scientific answers out of GDP or unemployment numbers.

That's why Lord Robert Skidelsky recommended a couple of years ago that while microeconomists could be allowed to proceed along pretty much the same statistical and mathematical path they'd been following, graduate education in macroeconomics needed to be dramatically revamped and supplemented with instruction in ethics, philosophy, and politics.

I'm not aware of this actually happening in any top economics PhD program (let me know if I'm wrong), despite the efforts of George Soros's Institute for New Economic Thinking and others. What I've noticed instead, though, is an increasing confidence and boldness among those who study economic issues through the lens of other academic disciplines.

A couple of years ago I spent a weekend with a bunch of business historians and came away impressed mainly by how embattled most of them felt. Lately, though, I've found myself talking to and reading a little of the work of sociologists and political scientists, and coming away impressed with how adept they are in quantitative methods, how knowledgeable they are about economics, and how willing they are to challenge economic orthodoxy. The two main writings I'm thinking about were unpublished drafts that will be coming out later in HBR and from the HBR Press, so I don't have links — but I get the sense that there are a lot of good examples out there, and that after years of looking mainly to mainstream economics journals I should be broadening my scope. (Two recommendations I've gotten from Harvard government professor Dan Carpenter: Capitalizing on Crisis: The Political Origins of the Rise of Finance, by sociologist Greta Krippner, and The New Global Rulers: The Privatization of Regulation in the World Economy, by political scientists Tim Büthe and Walter Mattli.)

Even anthropology, that most downtrodden of the social sciences, has been encroaching on economists' turf. When a top executive at the world's largest asset manager (Peter Fisher of BlackRock) lists Debt: The First 5,000 Years by anthropologist (and Occupy Wall Streeter) David Graeber as one of his top reads of 2012, you know something's going on.

Continued in article

Jensen Comment
Harvard's Justin Fox was a plenary speaker at the 2011 American Accounting Association Annual Meetings.
Those readers who have access to the AAA Commons may view his video at
http://commons.aaahq.org/posts/7bdb75d3d2


Forwarded by Jim Martin

An interesting controversy in economics sounds familiar.

According to Ronald Coase, it is time to reengage the severely impoverished
field of economics with the economy. He is a 101 year old Nobel Laureate in
economics and professor emeritus at the University of Chicago Law School. He
and Ning Wang of Arizona State University are launching a new journal,
Man and the Economy.

Coase, R. and N. Wang. 2012. Saving economics from the economists. Harvard Business Review (December 2012): 36.
http://www.businessweek.com/articles/2012-11-29/urging-economists-to-step-away-from-the-blackboard

January 6, 2013 reply from Bob Jensen

An economist once said that he hated the physical scientists because they stole all the easy research problems.

In a sense this is so true in one context. The earth does not change its rotation speed and path just because that speed and path are discovered by research. But people and social cohorts often change just because their behaviors are discovered by researchers.

Physical systems like gravity do not change with understanding of their behavior. Social and economic systems change with discovery. For example, economic and computer networking systems that work great in theory initially become corrupted as smart folks learn how to exploit them.

Hence in social science we must not only discover behavior but discover behavior that changes because we discover that behavior and discover behavior that changes because we discover the changes in behavior and so on and so on.

Except for quantum physics, it must be nice to be a physical scientist doing research on stationary systems. One reason the mathematics of the physical sciences fails us when extended to economics and the social sciences in general is that these sciences entail nonstationary systems. Equilibrium conditions are seldom reached. This, for example, is why Malthus was correct for an eye blink in astronomical time.

Respectfully,
Bob Jensen

"Urging Economists to Step Away From the Blackboard," by Brendan Greeley, Bloomberg Business Week, November 29, 2012 ---
http://www.businessweek.com/articles/2012-11-29/urging-economists-to-step-away-from-the-blackboard

Ronald Coase published his career-making paper, “The Nature of the Firm,” 75 years ago. He won the Nobel prize for economics in 1991. In a lecture in 2002, he argued that physics has moved beyond the assumptions of Isaac Newton, and biology beyond Darwin. (Not that he knew them.) But economics, he said, had failed to advance past the efficient-market assumptions of Adam Smith. This year Coase, a professor emeritus at the University of Chicago Law School, is attempting to start a new academic journal ambitiously titled Man and the Economy. The premise: Economics is broken. Coase’s journal is still just a plan, but his frustration with orthodox economics has energized his followers.

The financial crisis forced economists to confront the limitations of their profession. Former Federal Reserve Chairman Alan Greenspan admitted as much when he told Congress in October 2008 that markets might not regulate themselves after all. Coase says the problem runs deeper: Economists study abstractions and numbers, instead of firms and people. He doesn’t believe this can be fixed by tweaking models. An entire generation of economists must be encouraged to think differently.

The idea for the journal stems from his collaboration with Ning Wang, an assistant professor at the School of Politics and Global Studies at Arizona State University who grew up in a rice- and fish-farming village in the Hubei province of China. Coase, 101, began working with Wang in the 1990s at the University of Chicago. Neither has a degree in economics; the two understood each other. “We’re not constrained by a mainstream, orthodox view,” says Wang. “A lot of people would see this as a weakness.” Coase declined to be interviewed.

When Coase and Wang hosted a conference on China in 2008, they noticed that many Chinese academics had never talked to either policymakers or entrepreneurs from their own country. They had learned only what Coase calls “blackboard economics,” sets of theories and mathematical relationships between bits of data. “I came from China,” says Wang. “We have a lot of nationals come here; they’re taught game theory and econometrics. Then they’re going home … without a basic understanding of how the real world functions.”

In an essay published on Nov. 20 in Harvard Business Review, Coase argues that in the early 20th century, economists began to focus on relationships among statistical measures, rather than problems that firms have with production or people have with decisions. Economists began writing for each other, instead of for other disciplines or for the business community. “It is suicidal for the field to slide into a hard science of choice,” Coase writes in HBR, “ignoring the influences of society, history, culture, and politics on the working of the economy.” (By “choice,” he means ever more complex versions of price and demand curves.) Most economists, he argues, work with measures like gross domestic product and the unemployment rate that are too removed from how businesses actually work.

The solution for Coase and Wang is a journal that presents case studies, historical comparisons, and qualitative data—not just numbers but ideas, too. In top economics journals, says Wang, “people think as long as you have a big data set, that’s enough. You can do all kinds of modeling and regression, and it looks scientific enough.” Julie Nelson, chairwoman of the economics department at the University of Massachusetts Boston, says economists want the kind of immutable laws that physicists operate under. But Adam Smith’s 1776 idea that people are driven by self-interest is not the same as the law of gravity. “Ask an economist if they’d like to be thought of as a sociologist,” she says, “and they’ll look at you with terror in their eyes.”

Christopher Sims, a professor at Princeton University who won the Nobel prize last year for his work in macroeconomics, recognizes the problem. “We’re always abstracting and hoping that the resulting abstractions capture enough of the truth so that we know what’s going on,” he says. The kind of work that Coase and Wang are interested in, he says, is “not fashionable now. It’s hard to make it a science.” Where Coase and Wang see too little demand for new ideas, Sims sees too little supply. Both he and Nelson, who studies how economics is taught, describe a process at graduate schools that selects for economists inclined to focus on abstract modeling.

Continued in article

Bob Jensen's threads on accounting theory are at
http://www.trinity.edu/rjensen/Theory01.htm



Academic Accounting Inventors Are Rare

This is an award-winning clinical academic accounting research contribution to the profession of accountancy. It is totally within the spirit of the Pathways Commission initiatives ---
http://commons.aaahq.org/files/0b14318188/Pathways_Commission_Final_Report_Complete.pdf

"Scholars Receive American Accounting Association Award,"  by Terri Eyden, AccountingWeb, January 28, 2013 ---
http://www.accountingweb.com/article/scholars-receive-american-accounting-association-award/220891?source=education 

This January, the American Institute of CPAs (AICPA) and Chartered Institute of Management Accountants (CIMA) announced the recipients of the American Accounting Association's (AAA) Greatest Potential Impact on Management Accounting Practice Award for 2012. The award was presented to Ramji Balakrishnan, Eva Labro, and Konduru Sivaramakrishnan for their paper, Product Costs as Decision Aids: An Analysis of Alternative Approaches, which was published in Accounting Horizons, an AAA publication.

The award was presented at the AAA 2013 Management Accounting Section Conference in New Orleans, Louisiana, January 10-12, 2013, by Anne Farrell, PricewaterhouseCoopers-endowed assistant professor chair in accountancy, Farmer School of Business, Miami University, Oxford, Ohio; chair of the selection committee; and AAA Management Accounting Section (MAS) liaison to the AICPA Business & Industry Executive Committee.

According to the AICPA, the award recognizes academic papers that are considered the most likely to have a significant impact on management accounting practice. It is sponsored by the AICPA and CIMA, who are "working to elevate management accounting around the world and together created the Chartered Global Management Accountant (CGMA) designation to distinguish professionals who excel in the field."

Eligible papers must have been published within the previous five years and submitted by the authors or nominated by peers. The sponsorship value is $2,000.

Balakrishnan said he and his colleagues are especially appreciative of both institutes' commitment to supporting academic research in the area of management accounting.

"A lot of academic research in accounting today is in the realm of financial reporting, with a focus on publically listed firms for which extensive data sets are available for large-scale archival research," Balakrishnan said. "We are grateful for AICPA and CIMA's support of management accounting research because it provides the needed impetus to direct some of the research focus on measurement issues and decision tools that are key to enhancing operational efficiencies of any firm, whether public or private and of all sizes."

The award was created in 2009 to support the next generation of management accounting researchers and to recognize the importance of research to practice and the profession. Management accounting is a core discipline for the institute's members in business, industry, and government, according to the AICPA.

Continued in article

Jensen Comment
Although this research has not yet shown evidence of adoption in business firms around the world, it certainly becomes a candidate for addition to the following table.

In Canada
"Shift to Applied Research Triggers Protests," by Karen Birchard and Jennifer Lewington, Chronicle of Higher Education, February 9, 2014 ---
http://chronicle.com/article/Shift-to-Applied-Research/144659/

What is the purpose of university research? Should it be driven by intellectual curiosity or focused on satisfying immediate national needs? American higher education has long grappled with those questions, and today it is a global debate. Academics worldwide are becoming more vocal about their concerns.

Many agree that a proper balance can be struck between research that has an immediate benefit to the economy and research that opens the door for future discoveries. But for now, the balance may be off. In the following collection of articles, read more about three countries where scholars are taking steps to fight what they believe is a troubling focus on short-term, economic gains: Canada, Germany, and Britain.

Canada’s National Research Council has long been the country’s premier scientific institution, with its researchers helping to produce such inventions as the pacemaker and the robotic arm used on the American space shuttle. But last year its mission changed.

The Canadian government announced a transformation of the 98-year-old agency, once focused largely on basic research, into a one-stop "concierge service" to bolster historically weak technological innovation by industry and generate high-quality jobs.

The move has set off a row over the future of Canada’s capacity to carry out fundamental research, with university scientists and academic organizations uncharacteristically vocal about the government’s blunt preference to harness research for commercial needs.

"We are not sure the government appreciates the role that basic research plays," says Kenneth Ragan, a McGill University physicist and president of the Canadian Association of Physicists. "The real question is: How does it view not-directed, nonindustrial, curiosity-driven blue-sky research? I worry the view is that it is irrelevant at best and that in many cases they actually dislike it."

The remodeling of the research council is one in a series of policy changes that have generated fierce pushback by Canadian academe in recent years. The Conservative government of Prime Minister Stephen Harper is also under fire for closing research libraries, shutting down research facilities like the world-renowned Experimental Lakes Area, and restricting when government scientists can speak publicly about their work.

Last year the Canadian Association of University Teachers began a national campaign, "Get Science Right," with town-hall meetings across the country to mobilize public opposition to the policies. Scientists have even taken to the streets of several Canadian cities in protest.

While the transformation of the National Research Council has been criticized, the government as well as some science-policy analysts say better connecting businesses with research is an important step for Canada.

Having examined models in other countries, the National Research Council chose to streamline its operations to act as "the pivot between the two worlds" of industry and academics, with an eye toward new products and innovations, says Charles Drouin, a spokesman for the council. He says the agency has not moved away from support for fundamental research, but wants to focus such efforts better. "There is basic research, but it is directed as opposed to undirected as you would find it in universities."

Another battleground for the future of basic research has been the Natural Sciences and Engineering Research Council, a federal granting agency that serves as the first stop for support of fundamental research by Canadian scientists.

Continued in article

Jensen Comment
In the USA collegiate applied research varies greatly by discipline. Schools of engineering, medicine, and law have invented countless things of keen interest to the practicing professions. It is less so for schools of business and much, much less so for schools of accounting.

The Pathways Commission makes a concerted effort to get academic accounting researchers more engaged in research on practitioner needs. This, in my viewpoint, is less likely than growing coconut palm trees in New Hampshire ten years from now.

 

I would like to challenge subscribers of the AECM to fill out the following table:

Practitioner Clinical Application / Invented by an Accounting Professor

1. Balanced Scorecard ---
http://en.wikipedia.org/wiki/Balanced_scorecard
Bob Kaplan (shared invention)

2. REA ---
https://www.msu.edu/~mccarth4/McCarthy.pdf
Bill McCarthy

3. Business Budgeting in 1922 ---
http://en.wikipedia.org/wiki/Budgeting
James O. McKinsey

4. through 10. (open for AECM suggestions)

 

This challenge is very easy for practitioner clinical applications in medicine, natural science, social science, computer science, engineering, and finance. It's not so easy to find where inventions/discoveries by accounting professors made splashes in the practitioner pond. It might be questioned whether Bob Kaplan invented all the components of the popular Balanced Scorecard widely applied by corporations around the world. An earlier version in 1987 was invented by a practitioner named Art Schneiderman. But I think Bob Kaplan beginning in 1990 made so many seminal contributions to the scorecard that I will give him credit for the invention that made a huge splash in the practitioner pond.

When I was Program Director for the 1986 Annual Meetings of the American Accounting Association in New York City, I posed this challenge to Joel Demski to address in his plenary session (shared with Bob Kaplan). Joel suggested the practitioner applications of Dollar-Value LIFO. Subsequently, accounting historian Dale Flesher dug into this and discovered that DVL was invented by Herbert T. McAnly, who retired in 1964 as a partner at Ernst & Ernst after 44 years with the firm.

The Seminal Contributions to Accounting Literature Award of the American Accounting Association are as follows ---
http://aaahq.org/awards/awrd2win.htm

2007 — "Relevance Lost: The Rise and Fall of Management Accounting"
by H. Thomas Johnson and Robert S. Kaplan
Harvard Business School Press 1987

2004 — "Towards a Positive Theory of the Determination of Accounting Standards"
by Ross L. Watts and Jerold L. Zimmerman
The Accounting Review (January) 1978

1994 — "Economic Incentives in Budgetary Control Systems"
by Joel S. Demski and Gerald A. Feltham
The Accounting Review (April) 1978

1989 — "Information Content of Annual Earnings Announcements"
by William H. Beaver
Journal of Accounting Research 1968

1986 — "An Empirical Evaluation of Accounting Income Numbers"
by Ray Ball and Philip Brown
Journal of Accounting Research 1968

These are all tremendous contributions to the academic side of accountancy. However, none of the inventions of Professors Demski and Feltham to my knowledge made a splash in the practitioner pond. ABC costing focused upon by Johnson and Kaplan made a splash in the practitioner pond, but ABC costing was invented by cost accountants at John Deere.

The contributions of Watts, Zimmerman, Beaver, Ball, and Brown made splashes of sorts in the practice pond, but I have difficulty calling them seminal "inventions." In these instances the authors were extending into accounting inventions attributed earlier to professors and practitioners in economics and finance.

There are many other accounting professors who made seminal contributions to the academic side of accountancy. For example, Yuji Ijiri is a Hall of Famer with many noteworthy accountancy inventions. However, to my knowledge Yuji did not make a ripple in the practitioner pond, except maybe for selected practitioners trying to fend off the takeover of historical cost accounting by fair value accounting. Many of Yuji's seminal inventions, like the "Force," were just not deemed practical.

My own published research is best described as extensions and/or applications of inventions by others ---
http://www.trinity.edu/rjensen/Resume.htm#Published
To my knowledge none of my extensions made so much as a ripple in the practitioner pond.

January 19, 2013 reply from Dan Stone

A great idea.... which would probably be better in a research paper than on a list.

Anna Cianci and Bob Ashton published a paper a few years ago demonstrating how the KPMG audit research support initiative led to changes in auditor / audit firm practices.

So maybe:

Idea: the application of cognitive biases and decision aiding to audit practice
Professors: a large cast, many of whom got their PhDs at the Univ. of Illinois in the 1960s and 1970s, including Bob Ashton, Bob Libby, Kathryn Kadous, and many, many others

Idea: the risk-based audit
Professors: KPMG monograph by Howard Thomas, Ira Solomon, Mark Peecher (along with many others)

Dan Stone

 

January 20, 2013 reply from Bob Jensen

Hi Dan,

Thanks for the added considerations.

Among other things, your post suggests that some "inventions" do not have short names.

Some of your suggestions do need further research into where credit can be given for the very first inventions of what eventually made a splash in the practitioner pond.

For example, does anybody on the AECM (Miklos?) know where the concept of Risk-Based Auditing had its original starting point? I fear that it may be like Dollar-Value LIFO, where accounting professors picked up on the seminal idea of a practitioner. For example, did some employee of the Arthur Andersen accounting firm, which took risk-based auditing to its own demise, also invent the concept itself?

Robert Knechel (University of Florida) supposedly traced the history of risk-based auditing, but I've not seen his paper in this regard.

Respectfully,
Bob Jensen

"Academic Research With Mass Appeal," by Erin Zlome, Bloomberg Business Week, January 28, 2013 ---
http://www.businessweek.com/articles/2013-01-28/academic-research-with-mass-appeal

Business professors are great at writing jargon-filled, hard-to-digest research papers. But every once in a while, they knock it out of the park with the general public. A small pool of research achieved such blockbuster status in 2012 by becoming the most read, most downloaded, or most written-about pieces authored by professors at top business schools. Tax evasion, finding a job, and the benefits of teaching employees Spanish are some of the topics that got non-students reading.

At Harvard Business School, an excerpt from Clayton Christensen’s book How Will You Measure Your Life? was the year’s most read preview of forthcoming research. The passage uses the downfall of Blockbuster and the rise of Netflix (NFLX) as an analogy for how we may end up paying a high cost for small decisions.

Continued in article

January 31, 2013 reply from Dale Flesher

Bob:

Although they didn’t invent it, Johnson and Kaplan deserve credit for rediscovering and popularizing Activity-Based Costing.  As I recall, Alexander Hamilton Church described ABC as early as 1908, but without computers it wasn’t practical.

Also, James O. McKinsey, an accounting professor at the University of Chicago and 1924 AAA president who later founded McKinsey & Co., is credited with inventing the concept of business budgeting with the publication of his 1922 book on the subject.  Previously, budgeting had been considered a governmental topic.  Industry accountants (such as Donaldson Brown at General Motors, who had previously invented the DuPont Formula) applied McKinsey’s concepts and developed them further.  For example, GM (and also Westinghouse) developed flexible budgeting by 1928, which was not considered by McKinsey.

Dale

February 5, 2013 reply from Steve Zeff

In 1989, Nick Dopuch wrote, "Because of its practical implications, audit judgement research is regarded as having had the biggest impact on practice of any area of research in accounting/auditing" - p. 54 in Frecka (editor), The State of Accounting Research As We Enter the 1990's - Illinois PhD Jubilee 1939-1959 (University of Illinois, 1989).

Steve.

February 6, 2013 reply from Bob Jensen

My problem, in terms of my table, is that virtually all judgment research in accounting that I've encountered applies earlier inventions from other disciplines. Another problem with judgment research is that, except in rare instances like the Balanced Scorecard, the practitioners applying judgment models have no clue as to any link between an academic accounting researcher and practice.

This shortage of academic seminal inventions seems to be unique to the accounting profession. In nearly every other profession like engineering, medicine, economics, finance, marketing, management, sociology, psychology, education, etc. the table that I proposed filling could be filled in a New York minute with names of academic professor inventions and inventors linked to the practice of these professions.

For example, eigenvector scaling of paired-comparison decision alternatives is somewhat widely applied in business. Those practitioners applying it most likely recall the seminal contributions of mathematician Tom Saaty to what is now termed the Analytic Hierarchy Process (Tom's terminology) of business judgment. But those of us who applied AHP in accounting judgment research are long forgotten --- search for "eigenvector" at
http://www.trinity.edu/rjensen/Resume.htm#Published

Analytic Hierarchy Process ---
http://en.wikipedia.org/wiki/Analytic_hierarchy_process
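The eigenvector scaling mentioned above can be made concrete. Below is a minimal Python sketch (the pairwise-comparison matrix is hypothetical, not taken from any study) of Saaty's Analytic Hierarchy Process: priority weights are computed as the normalized principal eigenvector of a reciprocal judgment matrix, and Saaty's consistency index flags wildly inconsistent judgments.

```python
import numpy as np

# Hypothetical pairwise-comparison (reciprocal) matrix on Saaty's 1-9 scale:
# A[i, j] = how strongly alternative i is preferred to alternative j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights = principal eigenvector (largest eigenvalue), normalized.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Saaty's consistency index: CI = (lambda_max - n) / (n - 1); near 0 is good.
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)

print(w)   # weights for the three alternatives, summing to 1
print(CI)  # small value => the judgments are nearly consistent
```

In practice CI is divided by a random consistency index to form Saaty's consistency ratio, with ratios under roughly 0.10 conventionally considered acceptable.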

I'm probably stretching it to add a third name to the table below:

     Practitioner Clinical Application                 Invented by Accounting Professor

 1.  Balanced Scorecard ---                            Bob Kaplan (shared invention)
     http://en.wikipedia.org/wiki/Balanced_scorecard
 2.  REA ---                                           Bill McCarthy
     https://www.msu.edu/~mccarth4/McCarthy.pdf
 3.  Business Budgeting in 1922                        James O. McKinsey
     http://en.wikipedia.org/wiki/Budgeting
 4.
 5.
 6.
 7.
 8.
 9.
10.

 

 

MIT, like Harvard, places enormous value on having both feet planted in the real world

The professions of architecture, engineering, law, and medicine depend heavily upon university researchers who focus their research on the problems of practitioners working in the real world.

If accountics scientists want to change their ways and focus more on problems of the accounting practitioners working in the real world, one small step that can be taken is to study the presentations scheduled for a forthcoming MIT Sloan School Conference.

Financial Education Daily, May 2012 ---
http://paper.li/businessschools?utm_source=subscription&utm_medium=email&utm_campaign=paper_sub

Learning best practice from the best practitioners

MIT Sloan invites more than 400 of the world’s finest leaders to campus every year. The most anticipated of these visits are the talks given as part of the Dean’s Innovative Leader Series, which features the most dynamic movers and shakers of our day.

At a school that places enormous value on having both feet planted in the real world, the Dean’s Innovative Leader Series is a powerful learning tool. Students have the rare privilege of engaging in frank and meaningful discussions with the leaders who are shaping the present and future marketplace.

Bob Jensen's threads on other steps that should be taken by accountics scientists to become more focused on the needs of the profession ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm


Why genius lies in the selection of what is worth observing.
"The Art of Observation and How to Master the Crucial Difference Between Observation and Intuition," by Maria Popova, Brain Pickings, March 29, 2013
http://www.brainpickings.org/index.php/2013/03/29/the-art-of-observation/
This selection has a number of historic photographs of well-known scientists --- all women!

“In the field of observation,” legendary disease prevention pioneer Louis Pasteur famously proclaimed in 1854, “chance favors only the prepared mind.” “Knowledge comes from noticing resemblances and recurrences in the events that happen around us,” neuroscience godfather Wilfred Trotter asserted. That keen observation is what transmutes information into knowledge is indisputable — look no further than Sherlock Holmes and his exquisite mindfulness for a proof — but how, exactly, does one cultivate that critical faculty?

From The Art of Scientific Investigation (public library; public domain) by Cambridge University animal pathology professor W. I. B. Beveridge — the same fantastic 1957 compendium that explored the role of the intuition and imagination in science and how serendipity and “chance opportunism” fuel discovery — comes a timeless meditation on the art of observation, which he insists “is not passively watching but is an active mental process,” and the importance of distinguishing it from what we call intuition.

Though a number of celebrated minds favored intuition over rationality, and even Beveridge himself extolled the merits of the intuitive in science, he sides with modern-day admonitions about our tendency to mislabel other cognitive processes as “intuition” and advises:

It is important to realize that observation is much more than merely seeing something; it also involves a mental process. In all observations there are two elements : (a) the sense-perceptual element (usually visual) and (b) the mental, which, as we have seen, may be partly conscious and partly unconscious. Where the sense-perceptual element is relatively unimportant, it is often difficult to distinguish between an observation and an ordinary intuition. For example, this sort of thing is usually referred to as an observation: “I have noticed that I get hay fever whenever I go near horses.” The hay fever and the horses are perfectly obvious, it is the connection between the two that may require astuteness to notice at first, and this is a mental process not distinguishable from an intuition. Sometimes it is possible to draw a line between the noticing and the intuition, e.g. Aristotle commented that on observing that the bright side of the moon is always toward the sun, it may suddenly occur to the observer that the explanation is that the moon shines by the light of the sun.

For the practical applications of observation, Beveridge turns to French physiologist Claude Bernard’s model, pointing out the connection-making necessary for creativity:

Claude Bernard distinguished two types of observation: (a) spontaneous or passive observations which are unexpected; and (b) induced or active observations which are deliberately sought, usually on account of an hypothesis. … Effective spontaneous observation involves firstly noticing some object or event. The thing noticed will only become significant if the mind of the observer either consciously or unconsciously relates it to some relevant knowledge or past experience, or if in pondering on it subsequently he arrives at some hypothesis. In the last section attention was called to the fact that the mind is particularly sensitive to changes or differences. This is of use in scientific observation, but what is more important and more difficult is to observe (in this instance mainly a mental process) resemblances or correlations between things that on the surface appeared quite unrelated.

Echoing Jean Jacques Rousseau’s timeless words that “real wisdom is not the knowledge of everything, but the knowledge of which things in life are necessary, which are less necessary, and which are completely unnecessary to know” and Noam Chomsky’s similar assertion centuries later, Beveridge cautions:

One cannot observe everything closely, therefore one must discriminate and try to select the significant. When practicing a branch of science, the ‘trained’ observer deliberately looks for specific things which his training has taught him are significant, but in research he often has to rely on his own discrimination, guided only by his general scientific knowledge, judgment and perhaps an hypothesis which he entertains.

Continued in article

Jensen Comment
This raises the question of why creativity in accounting research is a rare event in terms of original inventions in the halls of our Academy ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Inventors

This is especially discouraging over the past five decades as accounting research in our Academy became dominated by accountics scientists ---
http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm

We close with a quotation from Scott McLemee demonstrating that what happened among accountancy academics over the past four decades is not unlike what happened in other academic disciplines that developed “internal dynamics of esoteric disciplines,” communicating among themselves in loops detached from their underlying professions. McLemee’s [2006] article stems from Bender [1993].

 “Knowledge and competence increasingly developed out of the internal dynamics of esoteric disciplines rather than within the context of shared perceptions of public needs,” writes Bender. “This is not to say that professionalized disciplines or the modern service professions that imitated them became socially irresponsible. But their contributions to society began to flow from their own self-definitions rather than from a reciprocal engagement with general public discourse.”

 

Now, there is a definite note of sadness in Bender’s narrative – as there always tends to be in accounts of the shift from Gemeinschaft to Gesellschaft. Yet it is also clear that the transformation from civic to disciplinary professionalism was necessary.

 

“The new disciplines offered relatively precise subject matter and procedures,” Bender concedes, “at a time when both were greatly confused. The new professionalism also promised guarantees of competence — certification — in an era when criteria of intellectual authority were vague and professional performance was unreliable.”

But in the epilogue to Intellect and Public Life, Bender suggests that the process eventually went too far. “The risk now is precisely the opposite,” he writes. “Academe is threatened by the twin dangers of fossilization and scholasticism (of three types: tedium, high tech, and radical chic). The agenda for the next decade, at least as I see it, ought to be the opening up of the disciplines, the ventilating of professional communities that have come to share too much and that have become too self-referential.”

Academic Versus Political Reporting of Research:  Percentage Columns Versus Per Capita Columns ---
http://www.cs.trinity.edu/~rjensen/temp/TaxAirlineSeatCase.htm
by Bob Jensen, April 3, 201


Accountics Scientists Aren't Going to Like This One
"Great Scientist ≠ Good at Math:  E.O. Wilson shares a secret: Discoveries emerge from ideas, not number-crunching," E.O. Wilson, The Wall Street Journal, April 5, 2013 ---
http://online.wsj.com/article/SB10001424127887323611604578398943650327184.html

For many young people who aspire to be scientists, the great bugbear is mathematics. Without advanced math, how can you do serious work in the sciences? Well, I have a professional secret to share: Many of the most successful scientists in the world today are mathematically no more than semiliterate.

During my decades of teaching biology at Harvard, I watched sadly as bright undergraduates turned away from the possibility of a scientific career, fearing that, without strong math skills, they would fail. This mistaken assumption has deprived science of an immeasurable amount of sorely needed talent. It has created a hemorrhage of brain power we need to stanch.

I speak as an authority on this subject because I myself am an extreme case. Having spent my precollege years in relatively poor Southern schools, I didn't take algebra until my freshman year at the University of Alabama. I finally got around to calculus as a 32-year-old tenured professor at Harvard, where I sat uncomfortably in classes with undergraduate students only a bit more than half my age. A couple of them were students in a course on evolutionary biology I was teaching. I swallowed my pride and learned calculus.

I was never more than a C student while catching up, but I was reassured by the discovery that superior mathematical ability is similar to fluency in foreign languages. I might have become fluent with more effort and sessions talking with the natives, but being swept up with field and laboratory research, I advanced only by a small amount.

Fortunately, exceptional mathematical fluency is required in only a few disciplines, such as particle physics, astrophysics and information theory. Far more important throughout the rest of science is the ability to form concepts, during which the researcher conjures images and processes by intuition.

Everyone sometimes daydreams like a scientist. Ramped up and disciplined, fantasies are the fountainhead of all creative thinking. Newton dreamed, Darwin dreamed, you dream. The images evoked are at first vague. They may shift in form and fade in and out. They grow a bit firmer when sketched as diagrams on pads of paper, and they take on life as real examples are sought and found.

Pioneers in science only rarely make discoveries by extracting ideas from pure mathematics. Most of the stereotypical photographs of scientists studying rows of equations on a blackboard are instructors explaining discoveries already made. Real progress comes in the field writing notes, at the office amid a litter of doodled paper, in the hallway struggling to explain something to a friend, or eating lunch alone. Eureka moments require hard work. And focus.

Ideas in science emerge most readily when some part of the world is studied for its own sake. They follow from thorough, well-organized knowledge of all that is known or can be imagined of real entities and processes within that fragment of existence. When something new is encountered, the follow-up steps usually require mathematical and statistical methods to move the analysis forward. If that step proves too technically difficult for the person who made the discovery, a mathematician or statistician can be added as a collaborator.

In the late 1970s, I sat down with the mathematical theorist George Oster to work out the principles of caste and the division of labor in the social insects. I supplied the details of what had been discovered in nature and the lab, and he used theorems and hypotheses from his tool kit to capture these phenomena. Without such information, Mr. Oster might have developed a general theory, but he would not have had any way to deduce which of the possible permutations actually exist on earth.

Over the years, I have co-written many papers with mathematicians and statisticians, so I can offer the following principle with confidence. Call it Wilson's Principle No. 1: It is far easier for scientists to acquire needed collaboration from mathematicians and statisticians than it is for mathematicians and statisticians to find scientists able to make use of their equations.

This imbalance is especially the case in biology, where factors in a real-life phenomenon are often misunderstood or never noticed in the first place. The annals of theoretical biology are clogged with mathematical models that either can be safely ignored or, when tested, fail. Possibly no more than 10% have any lasting value. Only those linked solidly to knowledge of real living systems have much chance of being used.

If your level of mathematical competence is low, plan to raise it, but meanwhile, know that you can do outstanding scientific work with what you have. Think twice, though, about specializing in fields that require a close alternation of experiment and quantitative analysis. These include most of physics and chemistry, as well as a few specialties in molecular biology.

Newton invented calculus in order to give substance to his imagination. Darwin had little or no mathematical ability, but with the masses of information he had accumulated, he was able to conceive a process to which mathematics was later applied.

Continued in article

Jensen Comment
Thus far I've come up with two inventions plus one shared invention by accounting researchers in our Academy. Can anybody add to this list ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Inventors



Robustness Issues

Robust Statistics --- http://en.wikipedia.org/wiki/Robust_statistics
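As a quick illustration of what "robust" means here (with made-up sample values): a single gross outlier can drag the sample mean far from the bulk of the data, while a robust estimator such as the median barely moves.

```python
import statistics

# Hypothetical sample with one gross outlier (say, a data-entry error).
sample = [9.8, 10.1, 10.0, 9.9, 10.2, 100.0]

mean = statistics.mean(sample)      # pulled far away by the single outlier
median = statistics.median(sample)  # robust: stays with the bulk of the data

print(mean)    # 25.0
print(median)  # 10.05
```

Robust statistics generalizes this idea: estimators and tests are designed so that conclusions do not hinge on a few aberrant observations or on small violations of distributional assumptions.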

"ECONOMICS AS ROBUSTNESS ANALYSIS," by Jaakko Kuorikoski, Aki Lehtinen and Caterina Marchionni, PhilSci-Archive, University of Pittsburgh, 2007 ---
http://philsci-archive.pitt.edu/3550/1/econrobu.pdf

ECONOMICS AS ROBUSTNESS ANALYSIS
Jaakko Kuorikoski, Aki Lehtinen and Caterina Marchionni
25.9.2007

1. Introduction
2. Making sense of robustness
3. Robustness in economics
4. The epistemic import of robustness analysis
5. An illustration: geographical economics models
6. Independence of derivations
7. Economics as a Babylonian science
8. Conclusions
 

1.Introduction
Modern economic analysis consists largely in building abstract mathematical models and deriving familiar results from ever sparser modeling assumptions is considered as a theoretical contribution. Why do economists spend so much time and effort in deriving same old results from slightly different assumptions rather than trying to come up with new and exciting hypotheses? We claim that this is because the process of refining economic models is essentially a form of robustness analysis. The robustness of modeling results with respect to particular modeling assumptions, parameter values or initial conditions plays a crucial role for modeling in economics for two reasons. First, economic models are difficult to subject to straightforward empirical tests for various reasons. Second, the very nature of economic phenomena provides little hope of ever making the modeling assumptions completely realistic. Robustness analysis is therefore a natural methodological strategy for economists because economic models are based on various idealizations and abstractions which make at least some of their assumptions unrealistic (Wimsatt 1987; 1994a; 1994b; Mäki 2000; Weisberg 2006b). The importance of robustness considerations in economics ultimately forces us to reconsider many commonly held views on the function and logical structure of economic theory.

Given that much of economic research praxis can be characterized as robustness analysis, it is somewhat surprising that philosophers of economics have only recently become interested in robustness. William Wimsatt has extensively discussed robustness analysis, which he considers in general terms as triangulation via independent ways of determination . According to Wimsatt, fairly varied processes or activities count as ways of determination: measurement, observation, experimentation, mathematical derivation etc. all qualify. Many ostensibly different epistemic activities are thus classified as robustness analysis. In a recent paper, James Woodward (2006) distinguishes four notions of robustness. The first three are all species of robustness as similarity of the result under different forms of determination. Inferential robustness refers to the idea that there are different degrees to which inference from some given data may depend on various auxiliary assumptions, and derivational robustness to whether a given theoretical result depends on the different modelling assumptions. The difference between the two is that the former concerns derivation from data, and the latter derivation from a set of theoretical assumptions. Measurement robustness means triangulation of a quantity or a value by (causally) different means of measurement. Inferential, derivational and measurement robustness differ with respect to the method of determination and the goals of the corresponding robustness analysis. Causal robustness, on the other hand, is a categorically different notion because it concerns causal dependencies in the world, and it should not be confused with the epistemic notion of robustness under different ways of determination.

In Woodward’s typology, the kind of theoretical model-refinement that is so common in economics constitutes a form of derivational robustness analysis. However, if Woodward (2006) and Nancy Cartwright (1991) are right in claiming that derivational robustness does not provide any epistemic credence to the conclusions, much of theoretical model- building in economics should be regarded as epistemically worthless. We take issue with this position by developing Wimsatt’s (1981) account of robustness analysis as triangulation via independent ways of determination. Obviously, derivational robustness in economic models cannot be a matter of entirely independent ways of derivation, because the different models used to assess robustness usually share many assumptions. Independence of a result with respect to modelling assumptions nonetheless carries epistemic weight by supplying evidence that the result is not an artefact of particular idealizing modelling assumptions. We will argue that although robustness analysis, understood as systematic examination of derivational robustness, is not an empirical confirmation procedure in any straightforward sense, demonstrating that a modelling result is robust does carry epistemic weight by guarding against error and by helping to assess the relative importance of various parts of theoretical models (cf. Weisberg 2006b). While we agree with Woodward (2006) that arguments presented in favour of one kind of robustness do not automatically apply to other kinds of robustness, we think that the epistemic gain from robustness derives from similar considerations in many instances of different kinds of robustness.

In contrast to physics, economic theory itself does not tell which idealizations are truly fatal or crucial for the modeling result and which are not. Economists often proceed on a preliminary hypothesis or an intuitive hunch that there is some core causal mechanism that ought to be modeled realistically. Turning such intuitions into a tractable model requires making various unrealistic assumptions concerning other issues. Some of these assumptions are considered or hoped to be unimportant, again on intuitive grounds. Such assumptions have been examined in economic methodology using various closely related terms such as Musgrave’s (1981) heuristic assumptions, Mäki’s (2000) early step assumptions, Hindriks’ (2006) tractability assumptions and Alexandrova’s (2006) derivational facilitators. We will examine the relationship between such assumptions and robustness in economic model-building by way of discussing a case: geographical economics. We will show that an important way in which economists try to guard against errors in modeling is to see whether the model’s conclusions remain the same if some auxiliary assumptions, which are hoped not to affect those conclusions, are changed. The case also demonstrates that although the epistemological functions of guarding against error and securing claims concerning the relative importance of various assumptions are somewhat different, they are often closely intertwined in the process of analyzing the robustness of some modeling result.

. . .

8. Conclusions
The practice of economic theorizing largely consists of building models with slightly different assumptions yielding familiar results. We have argued that this practice makes sense when seen as derivational robustness analysis. Robustness analysis is a sensible epistemic strategy in situations where we know that our assumptions and inferences are fallible, but not in what situations and in what way. Derivational robustness analysis guards against errors in theorizing when the problematic parts of the ways of determination, i.e. models, are independent of each other. In economics in particular, proving robust theorems from different models with diverse unrealistic assumptions helps us to evaluate what results correspond to important economic phenomena and what are merely artefacts of particular auxiliary assumptions. We have addressed Orzack and Sober’s criticism against robustness as an epistemically relevant feature by showing that their formulation of the epistemic situation in which robustness analysis is useful is misleading. We have also shown that their argument actually shows how robustness considerations are necessary for evaluating what a given piece of data can support. We have also responded to Cartwright’s criticism by showing that it relies on an untenable hope of a completely true economic model.

Viewing economic model building as robustness analysis also helps to make sense of the role of the rationality axioms that apparently provide the basis of the whole enterprise. Instead of the traditional Euclidian view of the structure of economic theory, we propose that economics should be approached as a Babylonian science, where the epistemically secure parts are the robust theorems and the axioms only form what Boyd and Richerson call a generalized sample theory, whose role is to help organize further modelling work and facilitate communication between specialists.

 

Jensen Comment
As I've mentioned before, I spent a goodly proportion of my time for two years in a think tank trying to invent adaptive regression and cluster analysis models. In every case the main reason for my failures was lack of robustness. In particular, two models fed the same predictor variables w, x, y, and z could generate different outcomes depending on the time ordering in which the variables entered the algorithms. This made the results dependent upon dynamic programming, which has rarely been noted for computational practicality ---
http://en.wikipedia.org/wiki/Dynamic_programming
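Jensen's point about order sensitivity can be illustrated with a toy sketch (entirely my own construction, not his think-tank models): a greedy procedure that fits one predictor at a time to the remaining residuals, in the order the variables arrive. With correlated predictors, the coefficient attributed to a variable depends heavily on when it enters.

```python
import random

def seq_fit(y, cols):
    """Regress the running residuals on one predictor at a time,
    in the order given (no intercept; predictors are mean-zero).
    Returns the coefficient attributed to each predictor."""
    resid = y[:]
    coefs = {}
    for name, x in cols:
        b = sum(xi * ri for xi, ri in zip(x, resid)) / sum(xi * xi for xi in x)
        coefs[name] = b
        resid = [ri - b * xi for ri, xi in zip(resid, x)]
    return coefs

random.seed(0)
n = 200
w = [random.gauss(0, 1) for _ in range(n)]
x = [wi + 0.3 * random.gauss(0, 1) for wi in w]            # x correlated with w
y = [wi + xi + random.gauss(0, 0.1) for wi, xi in zip(w, x)]

a = seq_fit(y, [("w", w), ("x", x)])  # w enters first
b = seq_fit(y, [("x", x), ("w", w)])  # x enters first
print(a["w"], b["w"])  # w's attributed coefficient differs sharply by ordering
```

An ordinary least-squares fit of both variables jointly would not depend on the ordering; it is the adaptive, sequential character of such algorithms that destroys robustness, which is the failure Jensen describes.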

 

 



"Overreliance on the Pseudo-Science of Economics," by Ethan Fosse and Orlando Patterson, The New York Times, February 9, 2015 ---
http://www.nytimes.com/roomfordebate/2015/02/09/are-economists-overrated/overreliance-on-the-pseudo-science-of-economics

Every year a Nobel Prize in Economics is awarded when in fact there is no “Nobel Prize in Economics.” There is only a “Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel.” That prize, which was invented by the Swedish central bank nearly 75 years after Alfred Nobel’s death, is an annoyance to the recipients of the five actual Nobel Prizes, those scholars from excluded scientific disciplines such as astronomy, and a living descendant of the donor, Peter Nobel, who has denounced it as a “PR coup by economists.”

This raises the question: Have we given economists too much authority based on mistaken views about their scientific reputation among established scientists and the public?

When asked about the degree to which various academic fields can be considered “scientific,” the American public is decidedly more mixed toward economics, ranking it well below established scientific fields such as physics or biology, and even below sociology.

It’s not the statistical models used by economists that is the problem, but the rejection of qualitative methods, other fields and viewpoints. The gulf between the economic view of the world and that of the lived experiences of the general population is often vast. For example, in June 2009, the National Bureau of Economic Research declared that the United States was no longer in a recession, in stark contrast with the felt, economic experience of 88 percent of Americans the following year.

It’s no wonder, then, that the real-world implementation of mainstream economic ideas has been a string of massive failures. Economic thinking undergirded the “deregulation” mantra leading up to the Great Recession of 2007-2009, and has fared no better in attempts to “fix” the ongoing crisis in Europe. However, nowhere is the discipline’s failure more apparent than in the area of development economics. In fact, the only countries that have effectively transformed from the “Third” to the “First World” since World War II violated the main principles of current and previous economic orthodoxies: China plus the “East Asian Tigers” of Singapore, Hong Kong, Taiwan and South Korea, whose policies entailed extensive state intervention into the economy, institutional reforms and the manipulation of prices and markets. Only recently have economists come to accept the primacy of institutions in explaining and promoting economic growth, a position long held by sociologists and political scientists.

The dominance of economistic thinking in domestic policymaking has similarly led to expensive, frequently disastrous failures. In many of these instances the expertise of sociologists and other academics more suited to the topics at hand were ignored or thoroughly rejected. A clear case in point is the Moving to Opportunity program, a randomized experiment in the 1990s that moved poor families to slightly less poor neighborhoods. Controversially, the researchers found no impact on earnings or educational attainment. The backlash was severe and swift, as sociologists, many of whom had been studying the impact of neighborhoods on poverty for decades, appropriately criticized the limited intervention and narrow focus on a small set of outcomes over a relatively short time period. It also meant scuttling policies that might have resulted in desegregation and real improvements in the housing and life chances of residents of America’s most impoverished neighborhoods.

While the annual ritual of economists awarding themselves a "Nobel Prize in Economics" may seem purely academic, the devastating consequences of placing too much authority in the ideas and policies of economists is too important to ignore.

Jensen Comment
I was tempted to write "ditto for accountics science," but pseudo science is more problematic in economics than in accounting, because the media and the practice world often pay attention to "scientists" in economics. The media and practice world virtually ignore academic research in accountics science. There are quite a lot of citations in accountics science, but those are accountics scientists citing each other. It's more or less a closed loop in the "Cargo Cult" world of accountics science.
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#CargoCult

How Accountics Scientists Should Change: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

By way of illustration, there are many blogs by economic scientists attempting to communicate with the economics profession and the public in general. I don't know of a single accountics scientist who maintains a blog trying to communicate with anybody.

The New York Times Debate
Are Economists Overrated?
February 9, 2015
http://www.nytimes.com/roomfordebate/2015/02/09/are-economists-overrated

 


From the Stanford University Encyclopedia of Philosophy
Science and Pseudo-Science --- http://plato.stanford.edu/entries/pseudo-science/

The demarcation between science and pseudoscience is part of the larger task to determine which beliefs are epistemically warranted. The entry clarifies the specific nature of pseudoscience in relation to other forms of non-scientific doctrines and practices. The major proposed demarcation criteria are discussed and some of their weaknesses are pointed out. In conclusion, it is emphasized that there is much more agreement in particular issues of demarcation than on the general criteria that such judgments should be based upon. This is an indication that there is still much important philosophical work to be done on the demarcation between science and pseudoscience.

1. The purpose of demarcations
2. The “science” of pseudoscience
3. The “pseudo” of pseudoscience

3.1 Non-, un-, and pseudoscience
3.2 Non-science posing as science
3.3 The doctrinal component
3.4 A wider sense of pseudoscience
3.5 The objects of demarcation
3.6 A time-bound demarcation

4. Alternative demarcation criteria

4.1 The logical positivists
4.2 Falsificationism
4.3 The criterion of puzzle-solving
4.4 Criteria based on scientific progress
4.5 Epistemic norms
4.6 Multi-criterial approaches

5. Unity in diversity

Bibliography

Bibliography of philosophically informed literature on pseudosciences and contested doctrines

Other Internet resources

Related Entries

Bibliography

Cited Works

Paul Feyerabend --- http://plato.stanford.edu/entries/feyerabend/

William Thomas Ziemba --- http://www.williamtziemba.com/WilliamZiemba-ShortCV.pdf

Thomas M. Cover --- http://en.wikipedia.org/wiki/Thomas_M._Cover

On June 15, 2013 David Johnstone wrote the following:

Dear all,
I worked on the logic and philosophy of hypothesis tests in the early 1980s and discovered a very large literature critical of standard forms of testing, a little of which was written by philosophers of science (see the more recent book by Howson and Urbach) and much of which was written by statisticians. At this point philosophy of science was warming up on significance tests and much has been written since. Something I have mentioned to a few philosophers however is how far behind the pace philosophy of science is in regard to all the new finance and decision theory developed in finance (e.g. options logic, mean-variance as an expression of expected utility). I think that philosophers would get a rude shock on just how clever and rigorous all this thinking work in “business” fields is. There is also wonderfully insightful work on betting-like decisions done by mathematicians, such as Ziemba and Cover, that has I think rarely if ever surfaced in the philosophy of science (“Kelly betting” is a good example). So although I believe modern accounting researchers should have far more time and respect for ideas from the philosophy of science, the argument runs both ways.

Jensen Comment
Note that in the above "cited works" there are no references to statisticians such as Ziemba and Cover, nor to the better-known statistical theory and statistical science literature.

This suggests somewhat the divergence of statistical theory from philosophical theory with respect to probability and hypothesis testing. Of course probability and hypothesis testing are part and parcel of both science and pseudo-science. Statistical theory may accordingly be a subject that divides pseudo-science from real science.

Etymology provides us with an obvious starting-point for clarifying what characteristics pseudoscience has in addition to being merely non- or un-scientific. “Pseudo-” (ψευδο-) means false. In accordance with this, the Oxford English Dictionary (OED) defines pseudoscience as follows:

“A pretended or spurious science; a collection of related beliefs about the world mistakenly regarded as being based on scientific method or as having the status that scientific truths now have.”

June 16, 2013 reply from Marc Dupree

Let me try again, better organized this time.

You (Bob) have referenced sources that include falsification and demarcation. A good idea. Also, AECM participants discuss hypothesis testing and Phi-Sci topics from time to time.

I didn't make my purpose clear. My purpose is to offer that falsification and demarcation are still relevant to empirical research, any empirical research.

So,

What is falsification in mathematical form?

Why does falsification not demarcate science from non-science?

And for fun: Did Popper know falsification didn't demarcate science from non-science?

Marc

June 17, 2013 reply from Bob Jensen

Hi Marc,

Falsification in science generally requires explanation. You really have not falsified a theory, or proven a theory, if all you can do is demonstrate an unexplained correlation. A huge problem in pseudo-science empiricism is that virtually all our databases are not sufficiently granulated to explain the correlations and predictability they reveal ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf
 
Mathematics is beautiful in many instances because theories are formulated in a way where finding a counter example ipso facto destroys the theory. This is not generally the case in the empirical sciences where exceptions (often outliers) arise even when causal mechanisms have been discovered. In genetics those exceptions are often mutations that infrequently but persistently arise in nature.
 
The key difference between pseudo-science and real-science, as I pointed out earlier in this thread, lies in explanation versus prediction (the F-twist) or causation versus correlation. When a research study concludes there is a correlation that cannot be explained we are departing from a scientific discovery.
 
Data mining research in particular suffers from an inability to find causes when the granulation needed for discovering causation simply is not contained in the databases. I've hammered on this one with a data mining illustration from Japanese accountics research published in TAR ----
"How Non-Scientific Granulation Can Improve Scientific Accountics"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf

 
Another huge problem in accountics science and empirical finance is statistical significance testing of correlation coefficients with enormous data mining samples. For example, R-squared coefficients of 0.001 are deemed statistically significant if the sample sizes are large enough:
My threads on Deirdre McCloskey (the Cult of Statistical Significance) and my own talk are at
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm
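The arithmetic behind this complaint is easy to demonstrate. The sketch below (a minimal illustration of my own, not taken from McCloskey or any cited study) computes the two-sided p-value for a correlation whose R-squared is a trivial 0.001, using the standard t-statistic for a correlation coefficient with a normal approximation (accurate at these sample sizes).

```python
import math

def corr_p_value(r: float, n: int) -> float:
    """Two-sided p-value for H0: rho = 0, using the t-statistic
    t = r * sqrt((n - 2) / (1 - r^2)) with a normal approximation
    (the t distribution is indistinguishable from normal at these df)."""
    t = r * math.sqrt((n - 2) / (1.0 - r * r))
    return math.erfc(abs(t) / math.sqrt(2.0))  # ~ 2 * (1 - Phi(|t|))

r = math.sqrt(0.001)  # an R-squared of 0.001 corresponds to r of about 0.032
for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9,}  p = {corr_p_value(r, n):.3g}")
```

At n = 100 the correlation is nowhere near significant; at a million observations the p-value is essentially zero, even though the correlation still explains only a thousandth of the variance. Significance at that scale measures sample size, not importance.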

 
A problem with real-science is that there's a distinction between the evolution of a theory and the ultimate discovery of the causal mechanisms. In the evolution of a theory there may be unexplained correlations or explanations that have not yet been validated (usually by replication). But genuine scientific discoveries entail explanation of phenomena. We like to think of physics and chemistry as real sciences. In fact they deal a lot with unexplained correlations before theories can finally be explained.
 
Perhaps a difference between a pseudo-science (like accountics science) and a real science (like chemistry) is that real scientists are never satisfied until they can explain causality to the satisfaction of their peers. Accountics scientists are generally satisfied with correlations and statistical inference tests that cannot explain root causes:
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf
 
Of course science is replete with examples of causal explanations that are later falsified or demonstrated to be incomplete. But the focus is on the causal mechanisms and not mere correlations.

In Search of the Theory of Everything
 "Physics’s pangolin:  Trying to resolve the stubborn paradoxes of their field, physicists craft ever more mind-boggling visions of reality," by Margaret Wertheim, AEON Magazine, June 2013 ---
 http://www.aeonmagazine.com/world-views/margaret-wertheim-the-limits-of-physics/

Of course social scientists complain that the problem in social science research is that the physicists stole all the easy problems.

Respectfully,
 

Bob Jensen

A testimonial illustration of why research findings need to be replicated and validated.
GM is also the company that bought the patent rights to the doomed Wankel Engine ---
http://en.wikipedia.org/wiki/Wankel_Engine

"The Sad Story of the Battery Breakthrough that Proved Too Good to Be True," by Kevin Bullis, MIT's Technology Review, December 6, 2013 ---
http://www.technologyreview.com/view/522361/the-sad-story-of-the-battery-breakthrough-that-proved-too-good-to-be-true/?utm_campaign=newsletters&utm_source=newsletter-daily-all&utm_medium=email&utm_content=20131209

Two lurkers on the AECM listserv forwarded the link below:
"The Replication Myth: Shedding Light on One of Science’s Dirty Little Secrets," by Jared Horvath, Scientific American, December 4, 2013 ---
http://blogs.scientificamerican.com/guest-blog/2013/12/04/the-replication-myth-shedding-light-on-one-of-sciences-dirty-little-secrets/

In a series of recent articles published in The Economist (Unreliable Research: Trouble at the Lab and Problems with Scientific Research: How Science Goes Wrong), authors warned of a growing trend in unreliable scientific research. These authors (and certainly many scientists) view this pattern as a detrimental byproduct of the cutthroat ‘publish-or-perish’ world of contemporary science.

In actuality, unreliable research and irreproducible data have been the status quo since the inception of modern science. Far from being ruinous, this unique feature of research is integral to the evolution of science.

At the turn of the 17th century, Galileo rolled a brass ball down a wooden board and concluded that the acceleration he observed confirmed his theory of the law of the motion of falling bodies. Several years later, Marin Mersenne attempted the same experiment and failed to achieve similar precision, causing him to suspect that Galileo fabricated his experiment.

Early in the 19th century, after mixing oxygen with nitrogen, John Dalton concluded that the combinatorial ratio of the elements proved his theory of the law of multiple proportions. Over a century later, J. R. Parington tried to replicate the test and concluded that “…it is almost impossible to get these simple ratios in mixing nitric oxide and air over water.”

At the beginning of the 20th century, Robert Millikan suspended drops of oil in an electric field, concluding that electrons have a single charge. Shortly afterwards, Felix Ehrenhaft attempted the same experiment and not only failed to arrive at an identical value, but also observed enough variability to support his own theory of fractional charges.

Other scientific luminaries have similar stories, including Mendel, Darwin and Einstein. Irreproducibility is not a novel scientific reality. As noted by contemporary journalists William Broad and Nicholas Wade, “If even history’s most successful scientists resort to misrepresenting their findings in various ways, how extensive may have been the deceits of those whose work is now rightly forgotten?”

There is a larger lesson to be gleaned from this brief history. If replication were the gold standard of scientific progress, we would still be banging our heads against our benches trying to arrive at the precise values that Galileo reported. Clearly this isn’t the case.

The 1980’s saw a major upswing in the use of nitrates to treat cardiovascular conditions. With prolonged use, however, many patients develop a nitrate tolerance. With this in mind, a group of drug developers at Pfizer set to creating Sildenafil, a pill that would deliver similar therapeutic benefits as nitrates without declining efficacy. Despite its early success, a number of unanticipated drug interactions and side-effects—including penile erections—caused doctors to shelve Sildenafil. Instead, the drug was re-trialed, re-packaged and re-named Viagra. The rest is history.

This tale illustrates the true path by which science evolves. Despite a failure to achieve initial success, the results generated during Sildenafil experimentation were still wholly useful and applicable to several different lines of scientific work. Had the initial researchers been able to massage their data to a point where they were able to publish results that were later found to be irreproducible, this would not have changed the utility of a sub-set of their results for the field of male potency.

Many are taught that science moves forward in discreet, cumulative steps; that truth builds upon truth as the tapestry of the universe slowly unfolds. Under this ideal, when scientific intentions (hypotheses) fail to manifest, scientists must tinker until their work is replicable everywhere at anytime. In other words, results that aren’t valid are useless.

In reality, science progresses in subtle degrees, half-truths and chance. An article that is 100 percent valid has never been published. While direct replication may be a myth, there may be information or bits of data that are useful among the noise. It is these bits of data that allow science to evolve. In order for utility to emerge, we must be okay with publishing imperfect and potentially fruitless data. If scientists were to maintain the ideal, the small percentage of useful data would never emerge; we’d all be waiting to achieve perfection before reporting our work.

This is why Galileo, Dalton and Millikan are held aloft as scientific paragons, despite strong evidence that their results are irreproducible. Each of these researchers presented novel methodologies, ideas and theories that led to the generation of many useful questions, concepts and hypotheses. Their work, if ultimately invalid, proved useful.

Doesn’t this state-of-affairs lead to dead ends, misused time and wasted money? Absolutely. It is here where I believe the majority of current frustration and anger resides. However, it is important to remember two things: first, nowhere is it written that all science can and must succeed. It is only through failure that the limits of utility can be determined. And second, if the history of science has taught us anything, it is that with enough time all scientific wells run dry. Whether due to the achievement of absolute theoretical completion (a myth) or, more likely, the evolution of more useful theories, all concepts will reach a scientific end.

Two reasons are typically given for not wanting to openly discuss the true nature of scientific progress and the importance of publishing data that may not be perfectly replicable: public faith and funding. Perhaps these fears are justified. It is a possibility that public faith will dwindle if it becomes common knowledge that scientists are too-often incorrect and that science evolves through a morass of noise. However, it is equally possible that public faith will decline each time this little secret leaks out in the popular press. It is a possibility that funding would dry up if, in our grant proposals, we openly acknowledge the large chance of failure, if we replace gratuitous theories with simple unknowns. However, it is equally possible that funding will diminish each time a researcher fails to deliver on grandiose (and ultimately unjustified) claims of efficacy and translatability.

Continued in article

Jensen Comment
I had to chuckle that, in an article belittling the role of reproducibility in science, the author leads out with an illustration of how Marin Mersenne's failure to reproduce one of Galileo's experiments led to suspicions that Galileo faked the experiment. It seems to me that this illustration reinforces the importance of reproducibility/replication in science.

I totally disagree that "unreliable research and irreproducible data have been the status quo since the inception of modern science." If that really were the "status quo," then all science would be pseudo science. Real scientists are obsessed with replication to the point that experimental findings are not considered new knowledge until they have been independently validated. That of course does not mean it is always easy, or even possible, to validate findings in modern science. Much of the spending in real science is devoted to validating earlier discoveries and to databases shared with other scientists.

Real scientists are generally required by top journals and funding sources to maintain detailed lab books of steps performed in laboratories. Data collected for use by other scientists (such as ocean temperature data) is generally subjected to validation tests so that research outcomes are less likely to be based upon flawed data. There are many examples where the reputations of scientists were badly tarnished due to the inability of other scientists to validate their findings ---
http://www.trinity.edu/rjensen/Plagiarism.htm#ProfessorsWhoPlagiarize

Nearly all real-science journals have instances where articles were later retracted because the findings could not be validated.

What the article does point out is that real scientists do not always validate findings independently. In other words, real science is often imperfect. But this does not necessarily make validation, reproduction, and replication of original discoveries less important. It only says that scientists themselves often deviate from their own standards of validation.

The article above does not change my opinion that reproducibility is the holy grail of real science. If findings are not validated, what you have is imperfect implementation of a scientific process rather than imperfect standards.

Accountics science is defined at http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm
In short, an accountics science study is any accounting research study that features equations and/or statistical inference.
One of the main reasons Bob Jensen contends that accountics science is not yet a real science is the lack of exacting replications of accountics science findings. By exacting replications he means reproducibility as defined in the IUPAC Gold Book ---
http://en.wikipedia.org/wiki/IUPAC_Gold_Book

My study of the 2013 articles in The Accounting Review suggests that over 90% of the articles rely upon purchased public databases such as CompuStat, CRSP, Datastream, and AuditAnalytics. The reasons I think accountics scientists are not usually real scientists include the following:

These and my other complaints about the lack of replications in accountics science can be found at
http://www.trinity.edu/rjensen/TheoryTAR.htm

 

The source of these oddities is Brian Dillon's intriguing Curiosity: Art and the Pleasures of Knowing (Hayward Publishing), a new volume of essays, excerpts, descriptions, and photographs that accompanies his exhibit of the same name, touring Britain and the Netherlands during 2013-14. But what does it mean to be curious?

"Triumph of the Strange," by James Delbourgo, Chronicle of Higher Education, December 8, 2013 ---
http://chronicle.com/article/Triumph-of-the-Strange/143365/?cid=cr&utm_source=cr&utm_medium=en


Video:  Noam Chomsky Calls Postmodern Critiques of Science Over-Inflated “Polysyllabic Truisms” ---
http://www.openculture.com/2013/07/noam-chomsky-calls-postmodern-critiques-of-science-over-inflated-polysyllabic-truisms.html


"Is Economics a Science," by Robert Shiller, QFinance, November 8, 2013 --- Click Here
http://www.qfinance.com/blogs/robert-j.shiller/2013/11/08/nobel-is-economics-a-science?utm_source=November+2013+email&utm_medium=Email&utm_content=Blog2&utm_campaign=EmailNov13

NEW HAVEN – I am one of the winners of this year’s Nobel Memorial Prize in Economic Sciences, which makes me acutely aware of criticism of the prize by those who claim that economics – unlike chemistry, physics, or medicine, for which Nobel Prizes are also awarded – is not a science. Are they right?

One problem with economics is that it is necessarily focused on policy, rather than discovery of fundamentals. Nobody really cares much about economic data except as a guide to policy: economic phenomena do not have the same intrinsic fascination for us as the internal resonances of the atom or the functioning of the vesicles and other organelles of a living cell. We judge economics by what it can produce. As such, economics is rather more like engineering than physics, more practical than spiritual.

There is no Nobel Prize for engineering, though there should be. True, the chemistry prize this year looks a bit like an engineering prize, because it was given to three researchers – Martin Karplus, Michael Levitt, and Arieh Warshel – “for the development of multiscale models of complex chemical systems” that underlie the computer programs that make nuclear magnetic resonance hardware work. But the Nobel Foundation is forced to look at much more such practical, applied material when it considers the economics prize.

The problem is that, once we focus on economic policy, much that is not science comes into play. Politics becomes involved, and political posturing is amply rewarded by public attention. The Nobel Prize is designed to reward those who do not play tricks for attention, and who, in their sincere pursuit of the truth, might otherwise be slighted.
 

The pursuit of truth


Why is it called a prize in “economic sciences”, rather than just “economics”? The other prizes are not awarded in the “chemical sciences” or the “physical sciences”.

 

Fields of endeavor that use “science” in their titles tend to be those that get masses of people emotionally involved and in which crackpots seem to have some purchase on public opinion. These fields have “science” in their names to distinguish them from their disreputable cousins.

The term political science first became popular in the late eighteenth century to distinguish it from all the partisan tracts whose purpose was to gain votes and influence rather than pursue the truth. Astronomical science was a common term in the late nineteenth century, to distinguish it from astrology and the study of ancient myths about the constellations. Hypnotic science was also used in the nineteenth century to distinguish the scientific study of hypnotism from witchcraft or religious transcendentalism.
 

Crackpot counterparts


There was a need for such terms back then, because their crackpot counterparts held much greater sway in general discourse. Scientists had to announce themselves as scientists.

 

In fact, even the term chemical science enjoyed some popularity in the nineteenth century – a time when the field sought to distinguish itself from alchemy and the promotion of quack nostrums. But the need to use that term to distinguish true science from the practice of impostors was already fading by the time the Nobel Prizes were launched in 1901.

Similarly, the terms astronomical science and hypnotic science mostly died out as the twentieth century progressed, perhaps because belief in the occult waned in respectable society. Yes, horoscopes still persist in popular newspapers, but they are there only for the severely scientifically challenged, or for entertainment; the idea that the stars determine our fate has lost all intellectual currency. Hence there is no longer any need for the term “astronomical science.”
 

Pseudoscience?


Critics of “economic sciences” sometimes refer to the development of a “pseudoscience” of economics, arguing that it uses the trappings of science, like dense mathematics, but only for show. For example, in his 2004 book, Fooled by Randomness, Nassim Nicholas Taleb said of economic sciences:
“You can disguise charlatanism under the weight of equations, and nobody can catch you since there is no such thing as a controlled experiment.”

But physics is not without such critics, too. In his 2004 book, The Trouble with Physics: The Rise of String Theory, The Fall of a Science, and What Comes Next, Lee Smolin reproached the physics profession for being seduced by beautiful and elegant theories (notably string theory) rather than those that can be tested by experimentation. Similarly, in his 2007 book, Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law, Peter Woit accused physicists of much the same sin as mathematical economists are said to commit.


 

Exposing the charlatans


My belief is that economics is somewhat more vulnerable than the physical sciences to models whose validity will never be clear, because the necessity for approximation is much stronger than in the physical sciences, especially given that the models describe people rather than magnetic resonances or fundamental particles. People can just change their minds and behave completely differently. They even have neuroses and identity problems - complex phenomena that the field of behavioral economics is finding relevant to understand economic outcomes.

 

But all the mathematics in economics is not, as Taleb suggests, charlatanism. Economics has an important quantitative side, which cannot be escaped. The challenge has been to combine its mathematical insights with the kinds of adjustments that are needed to make its models fit the economy’s irreducibly human element.

The advance of behavioral economics is not fundamentally in conflict with mathematical economics, as some seem to think, though it may well be in conflict with some currently fashionable mathematical economic models. And, while economics presents its own methodological problems, the basic challenges facing researchers are not fundamentally different from those faced by researchers in other fields. As economics develops, it will broaden its repertory of methods and sources of evidence, the science will become stronger, and the charlatans will be exposed.

 

 


"A Pragmatist Defence of Classical Financial Accounting Research," by Brian A. Rutherford, Abacus, Volume 49, Issue 2, pages 197–218, June 2013 ---
http://onlinelibrary.wiley.com/doi/10.1111/abac.12003/abstract

The reason for the disdain in which classical financial accounting research has come to be held by many in the scholarly community is its allegedly insufficiently scientific nature. While many have defended classical research or provided critiques of post-classical paradigms, the motivation for this paper is different. It offers an epistemologically robust underpinning for the approaches and methods of classical financial accounting research that restores its claim to legitimacy as a rigorous, systematic and empirically grounded means of acquiring knowledge. This underpinning is derived from classical philosophical pragmatism and, principally, from the writings of John Dewey. The objective is to show that classical approaches are capable of yielding serviceable, theoretically based solutions to problems in accounting practice.

Jensen Comment
When it comes to the "insufficiently scientific nature" of classical accounting research, I should note yet again that accountics science never attained the status of a real science, where the main criteria are the search for causes and an obsession with replication (reproducibility) of findings.

Accountics science is overrated because it only achieved the status of a pseudoscience ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Pseudo-Science

"Research on Accounting Should Learn From the Past" by Michael H. Granof and Stephen A. Zeff, Chronicle of Higher Education, March 21, 2008

The unintended consequence has been that interesting and researchable questions in accounting are essentially being ignored. By confining the major thrust in research to phenomena that can be mathematically modeled or derived from electronic databases, academic accountants have failed to advance the profession in ways that are expected of them and of which they are capable.

Academic research has unquestionably broadened the views of standards setters as to the role of accounting information and how it affects the decisions of individual investors as well as the capital markets. Nevertheless, it has had scant influence on the standards themselves.

Continued in article

"Research on Accounting Should Learn From the Past," by Michael H. Granof and
 Stephen A. Zeff, Chronicle of Higher Education, March 21, 2008

. . .

The narrow focus of today's research has also resulted in a disconnect between research and teaching. Because of the difficulty of conducting publishable research in certain areas — such as taxation, managerial accounting, government accounting, and auditing — Ph.D. candidates avoid choosing them as specialties. Thus, even though those areas are central to any degree program in accounting, there is a shortage of faculty members sufficiently knowledgeable to teach them.

To be sure, some accounting research, particularly that pertaining to the efficiency of capital markets, has found its way into both the classroom and textbooks — but mainly in select M.B.A. programs and the textbooks used in those courses. There is little evidence that the research has had more than a marginal influence on what is taught in mainstream accounting courses.

What needs to be done? First, and most significantly, journal editors, department chairs, business-school deans, and promotion-and-tenure committees need to rethink the criteria for what constitutes appropriate accounting research. That is not to suggest that they should diminish the importance of the currently accepted modes or that they should lower their standards. But they need to expand the set of research methods to encompass those that, in other disciplines, are respected for their scientific standing. The methods include historical and field studies, policy analysis, surveys, and international comparisons when, as with empirical and analytical research, they otherwise meet the tests of sound scholarship.

Second, chairmen, deans, and promotion and merit-review committees must expand the criteria they use in assessing the research component of faculty performance. They must have the courage to establish criteria for what constitutes meritorious research that are consistent with their own institutions' unique characters and comparative advantages, rather than imitating the norms believed to be used in schools ranked higher in magazine and newspaper polls. In this regard, they must acknowledge that accounting departments, unlike other business disciplines such as finance and marketing, are associated with a well-defined and recognized profession. Accounting faculties, therefore, have a special obligation to conduct research that is of interest and relevance to the profession. The current accounting model was designed mainly for the industrial era, when property, plant, and equipment were companies' major assets. Today, intangibles such as brand values and intellectual capital are of overwhelming importance as assets, yet they are largely absent from company balance sheets. Academics must play a role in reforming the accounting model to fit the new postindustrial environment.

Third, Ph.D. programs must ensure that young accounting researchers are conversant with the fundamental issues that have arisen in the accounting discipline and with a broad range of research methodologies. The accounting literature did not begin in the second half of the 1960s. The books and articles written by accounting scholars from the 1920s through the 1960s can help to frame and put into perspective the questions that researchers are now studying.

Continued in article

How accountics scientists should change ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm


Those Big Flubs versus Those Winning Discoveries in the Evolution of Scientific Knowledge Over Time
Book Reviews of "Science, Right and Wrong: The evolution of knowledge"
by Sam Kean
American Scholar
Summer 2013
http://theamericanscholar.org/science-right-and-wrong/#.Uga0521sjmt

Curiosity: How Science Became Interested in Everything, By Philip Ball, University of Chicago Press, 465 pp., $35

Brilliant Blunders: From Darwin to Einstein—Colossal Mistakes by Great Scientists That Changed Our Understanding of Life and the Universe, 
By Mario Livio, Simon & Schuster, 341 pp., $26

Aristotle called it aimless and witless. St. Augustine condemned it as a disease. The ancient Greeks blamed it for Pandora’s unleashing destruction on the world. And one early Christian leader even pinned the fall of Lucifer himself on idle, intemperate, unrestrained curiosity.

Today, the exploration of new places and new ideas seems self-evidently a good thing. For much of human history, though, priests, politicians, and philosophers cast a suspicious eye on curious folks. It wasn’t just that staring at rainbows all day or pulling apart insects’ wings seemed weird, even childish. It also represented a colossal waste of time, which could be better spent building the economy or reading the Bible. Philip Ball explains in his thought-provoking new book, Curiosity, that only in the 1600s did society start to sanction (or at least tolerate) the pursuit of idle interests. And as much as any other factor, Ball argues, that shift led to the rise of modern science.

We normally think about the early opposition to science as simple religious bias. But “natural philosophy” (as science was then known) also faced serious philosophical objections, especially about the trustworthiness of the knowledge obtained. For instance, Galileo used a telescope to discover both the craters on our moon and the existence of moons orbiting Jupiter. These discoveries demonstrated, contra the ancient Greeks, that not all heavenly bodies were perfect spheres and that not all of them orbited Earth. Galileo’s conclusions, however, relied on a huge assumption—that his telescope provided a true picture of the heavens. How could he know, his critics protested, that optical instruments didn’t garble or distort as much as they revealed? It’s a valid point.

Another debate revolved around what now seems like an uncontroversial idea: that scientists should perform experiments. The sticking point was that experiments, almost by definition, explore nature under artificial conditions. But if you want to understand nature, shouldn’t the conditions be as natural as possible—free from human interference? Perhaps the results of experiments were no more reliable than testimony extracted from witnesses under torture.

Specific methods aside, critics argued that unregulated curiosity led to an insatiable desire for novelty—not to true knowledge, which required years of immersion in a subject. Today, in an ever-more-distracted world, that argument resonates. In fact, even though many early critics of natural philosophy come off as shrill and small-minded, it’s a testament to Ball that you occasionally find yourself nodding in agreement with people who ended up on the “wrong” side of history.

Ultimately, Curiosity is a Big Ideas book. Although Newton, Galileo, and others play important roles, Ball wants to provide a comprehensive account of early natural philosophy, and that means delving into dozens of other, minor thinkers. In contrast, Mario Livio’s topsy-turvy book, Brilliant Blunders, focuses on Big Names in science history. It’s a telling difference that whereas Ball’s book, like a Russian novel, needs an appendix with a cast of characters, Livio’s characters usually go by one name—Darwin, Kelvin, Pauling, Hoyle, and Einstein.

Livio’s book is topsy-turvy because, rather than repeat the obvious—these were some smart dudes—he examines infamous mistakes they made. He also indulges in some not always convincing armchair psychology to determine how each man’s temperament made him prone to commit the errors he did.

For those of us who, when reading about such thinkers, can’t help but compare our own pitiful intellects with theirs, this focus on mistakes is both encouraging and discouraging. It’s encouraging because their mistakes remind us that they were fallible, full of the same blind spots and foibles we all have. It’s discouraging because, even at their dumbest, these scientists did incredible work. Indeed, Livio argues that their “brilliant blunders” ended up benefiting science overall.

Take Kelvin’s error. During the heyday of William Thomson, Lord Kelvin, in the late 1800s, various groups of scientists had an enormous row over the age of Earth, in large part because Darwin’s theory of natural selection seemed to require eons upon eons of time. Unfortunately, geologists provided little clarity here: they could date fossils and rock strata only relatively, not absolutely, so their estimates varied wildly. Into this vacuum stepped Kelvin, a mathematical physicist who studied heat. Kelvin knew that Earth had probably been a hot, molten liquid in the past. So if he could determine Earth’s initial temperature, its current temperature, and its rate of cooling, he could calculate its age. His initial estimate was 20 million years.

Continued in book reviews


June 5, 2013 reply to a long thread by Bob Jensen

Hi Steve,

As usual, these AECM threads between you, me, and Paul Williams resolve nothing to date. TAR still has zero articles without equations unless such articles are forced upon editors like the Kaplan article was forced upon you as Senior Editor. TAR still has no commentaries about the papers it publishes and the authors make no attempt to communicate and have dialog about their research on the AECM or the AAA Commons.

I do hope that our AECM threads will continue and lead one day to when the top academic research journals do more to encourage (1) validation (usually by speedy replication), (2) alternative methodologies, (3) more innovative research, and (4) more interactive commentaries.

I remind you that Professor Basu's essay is only one of four essays bundled together in Accounting Horizons on the topic of how to make accounting research, especially the so-called Accounting Science or Accountics Science or Cargo Cult science, more innovative.

The four essays in this bundle are summarized and extensively quoted at http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Essays 

I will try to keep drawing attention to these important essays and spend the rest of my professional life trying to bring accounting research closer to the accounting profession.

I also want to dispel the myth that accountics research is harder than making research discoveries without equations. The hardest research I can imagine (and where I failed) is to make a discovery that has a noteworthy impact on the accounting profession. I always look but never find such discoveries reported in TAR.

The easiest research is to purchase a database and beat it with an econometric stick until something falls out of the clouds. I've searched for years and find very little that has a noteworthy impact on the accounting profession. Quite often there is a noteworthy impact on other members of the Cargo Cult and doctoral students seeking to beat the same data with their sticks. But try to find a practitioner with an interest in these academic accounting discoveries?

Our latest thread leads me to such questions as:

  1. Is accounting research of inferior quality relative to other disciplines like engineering and finance?

     
  2. Are there serious innovation gaps in academic accounting research?

     
  3. Is accounting research stagnant?

     
  4. How can accounting researchers be more innovative?

     
  5. Is there an "absence of dissent" in academic accounting research?

     
  6. Is there an absence of diversity in our top academic accounting research journals and doctoral programs?

     
  7. Is there a serious disinterest (except among the Cargo Cult) and lack of validation in findings reported in our academic accounting research journals, especially TAR?

     
  8. Is there a huge communications gap between academic accounting researchers and those who toil teaching accounting and practicing accounting?

     
  9. Why do our accountics scientists virtually ignore the AECM and the AAA Commons and the Pathways Commission Report?
    http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm

One fallout of this thread is that I've been privately asked to write a paper about these matters. I hope that others will compete with me in thinking and writing about these serious challenges to academic accounting research that never seem to get resolved.

Thank you Steve for sometimes responding in my threads on such issues in the AECM.

Respectfully,
Bob Jensen

 

June 16, 2013 message from Bob Jensen

Hi Marc,

The mathematics of falsification is essentially the same as the mathematics of proof negation.
 
If mathematics is a science it's largely a science of counter examples.
 
Regarding real-real science versus pseudo-science, one criterion is that of explanation (not just prediction) that satisfies a community of scholars. One of the best examples of this is the exchanges between two Nobel economists --- Milton Friedman versus Herb Simon.
 

From
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm

Jensen Comment
Interestingly, two Nobel economists slugged out the very essence of theory some years back. Herb Simon insisted that the purpose of theory was to explain. Milton Friedman went off on the F-Twist tangent, saying that it was enough if a theory merely predicted. I lost some (certainly not all) respect for Friedman over this. Deirdre, who knew Milton, claims that deep in his heart Milton did not ultimately believe this to the degree that it is attributed to him. Of course, Deirdre herself is not a great admirer of Neyman, Savage, or Fisher.

Friedman's essay "The Methodology of Positive Economics" (1953) provided the epistemological pattern for his own subsequent research and to a degree that of the Chicago School. There he argued that economics as science should be free of value judgments for it to be objective. Moreover, a useful economic theory should be judged not by its descriptive realism but by its simplicity and fruitfulness as an engine of prediction. That is, students should measure the accuracy of its predictions, rather than the 'soundness of its assumptions'. His argument was part of an ongoing debate among such statisticians as Jerzy Neyman, Leonard Savage, and Ronald Fisher.

Stanley Wong, 1973. "The 'F-Twist' and the Methodology of Paul Samuelson," American Economic Review, 63(3) pp. 312-325. Reprinted in J.C. Wood & R.N. Woods, ed., 1990, Milton Friedman: Critical Assessments, v. II, pp. 224- 43.
http://www.jstor.org/discover/10.2307/1914363?uid=3739712&uid=2129&uid=2&uid=70&uid=4&uid=3739256&sid=21102409988857
 

Respectfully,
 
Bob Jensen

June 18, 2013 reply to David Johnstone by Jagdish Gangolly

David,

Your call for a dialogue between statistics and philosophy of science is very timely, and extremely important considering the importance that statistics, in both its probabilistic and non-probabilistic incarnations, has gained ever since the computational advances of the past three decades or so. Let me share a few of my conjectures regarding the cause of this schism between statistics and philosophy, and consider a few areas where they can share in mutual reflection. However, reflection in statistics, as in accounting of late and unlike in philosophy, has been in short supply for quite a while. And it is always easier to pick the low-hanging fruit. Albert Einstein once remarked, "I have little patience with scientists who take a board of wood, look for the thinnest part and drill a great number of holes where drilling is easy."

1.

Early statisticians were practitioners of the art, most serving as consultants of sorts. Gosset worked for Guinness, GEP Box did most of his early work for Imperial Chemical Industries (ICI), Fisher worked at Rothamsted Experimental Station, Loeve was an actuary at the University of Lyon... As practitioners, statisticians almost always had their feet in one of the domains of science: Fisher was a biologist, Gosset was a chemist, Box was a chemist, ... Their research was down to earth, and while statistics was always regarded as the turf of mathematicians, their status within mathematics was the same as that of accountants in liberal arts colleges today, slightly above that of athletics. Of course, the individuals with stature were expected to be mathematicians in their own right.

All that changed with the work of Kolmogorov (1933, Moscow State, http://www.socsci.uci.edu/~bskyrms/bio/readings/kolmogorov_theory_of_probability_small.pdf), Loeve (1960, Berkeley), Doob(1953, Illinois), and Dynkin(1963, Moscow State and Cornell). They provided mathematical foundations for earlier work of practitioners, and especially Kolmogorov provided axiomatic foundations for probability theory. In the process, their work unified statistics into a coherent mass of knowledge. (Perhaps there is a lesson here for us accountants). A collateral effect was the schism in the field between the theoreticians and the practitioners (of which we accountants must be wary) that has continued to this date. We can see a parallel between accounting and statistics here too.

2.

Early controversies in statistics had to do with embedding statistical methods in decision theory (Fisher was against it, Neyman and Pearson were for it), and with whether the foundations of statistics had to be deductive or inductive (frequentists were for the former, Bayesians for the latter). These debates were not just technical, and had underpinnings in philosophy, especially philosophy of mathematics (after all, the early contributors to the field were mathematicians: Gauss, Fermat, Pascal, Laplace, deMoivre, ...). For example, when the Fisher-Neyman/Pearson debates raged, Neyman was invited by the philosopher Jaakko Hintikka to write a paper for the journal Synthese ("Frequentist probability and Frequentist statistics", 1977).

3.

Since the early statisticians were practitioners, their orientation was usually normative: sampling theory, regression, design of experiments, .... The mathematisation of statistics and the later work of people like Tukey raised the prominence of descriptive (especially axiomatic) work in the field. However, the recent developments in data mining have swung the balance again in favour of the normative.

4. Foundational issues in statistics have always been philosophical. And treatment of probability has been profoundly philosophical (see for example http://en.wikipedia.org/wiki/Probability_interpretations).

Regards,

Jagdish

June 18, 2013 reply from David Johnstone

Dear Jagdish, as usual your knowledge and perspectives are great to read.

In reply to your points: (1) the early development of statistics by Gosset and Fisher was as a means to an end, i.e. to design and interpret experiments that helped to resolve practical issues, like whether fertilizers were effective and different genetic strains of crops were superior. This left results testable in the real-world laboratory, by the farmers, so the pressure to get it right rather than just publish was on. Gosset, by the way, was an old-fashioned English scholar who spent as much time fishing and working in his workshop as doing mathematics. This practical bent comes out in his work.

(2) Neyman’s effort to make statistics “deductive” was always his weak point, and he went to great lengths to evade this issue. I wrote a paper on Neyman’s interpretations of tests, as in trying to understand him I got frustrated by his inconsistency and evasiveness over his many papers. In more than one place, he wrote that to “accept” the null is to “act as if it is true”, and to reject it is to “act as if it is false”. This is ridiculous in scientific contexts, since to act as if something is decided 100% you would never draw another sample - your work would be done on that hypothesis.

(3) On the issue of normative versus descriptive, as in accounting research, Harold Jeffreys had a great line in his book: if we observe a child add 2 and 2 to get 5, we don’t change the laws of arithmetic. He was very anti learning about the world by watching people rather than doing abstract theory. BTW I own his personal copy of his 3rd edition. A few years ago I went to buy this book on Bookfinder, and found it available in a secondhand bookshop in Cambridge. I rang them instantly when I saw that they said whose book it was, and they told me that Mrs Jeffreys had just died and Harold’s books had come in, and that the 1st edition had been sold the day before.

(4) I adore your line that “Foundational issues in statistics have always been philosophical”. .... So must they be in accounting, in relation to how to construct income and net assets measures that are sound and meaningful. Note however that just because we accept something needs philosophical footing doesn’t mean that we will find or agree on that footing. I recently received a comment on a paper of mine from an accounting referee. The comment was basically that the effect of information on the cost of capital “could not be revealed by philosophy” (i.e. by probability theory etc.). Rather, this is an empirical issue. Apart from ignoring all the existing theory on this matter in accounting and finance, the comment is symptomatic of the way that “empirical findings” have been elevated to the top shelf, and theory, or worse, “thought pieces”, are not really science. There is so much wrong with this extreme but common view, including of course that every empirical finding stands on a model or a priori view. Indeed, remember that every null hypothesis that was ever rejected might have been rejected because the model (not the hypothesis) was wrong. People naively believe that a bad model or bad experimental design just reduces power (makes it harder to reject the null) but the mathematical fact is that it can go either way, and error in the model or sample design can make rejection of the null almost certain.
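David's closing point, that error in the model or sample design can make rejection of the null almost certain, is easy to illustrate with a small simulation. The numbers below are entirely hypothetical (not from the thread): we test H0: mu = 0 at the nominal 5% level using the usual iid standard error s/sqrt(n), while the data actually follow an AR(1) process, so the standard error is badly understated and the "5%" test rejects a true null most of the time.

```python
import math
import random

random.seed(1)

def rejects_null(n=200, rho=0.9):
    """Test H0: mu = 0 with the iid standard error s/sqrt(n),
    but generate the data from an AR(1) process -- the model is wrong."""
    e = 0.0
    xs = []
    for _ in range(n):
        e = rho * e + random.gauss(0, 1)   # true mean is exactly 0
        xs.append(e)
    mean = sum(xs) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    t = mean / (s / math.sqrt(n))
    return abs(t) > 1.96                   # nominal 5% two-sided test

trials = 1000
rate = sum(rejects_null() for _ in range(trials)) / trials
print(round(rate, 3))   # far above the nominal 0.05
```

With rho = 0.9 the variance of the sample mean is roughly (1 + rho)/(1 - rho) = 19 times what the iid formula assumes, so the rejection rate lands well above one half rather than at 5%: a bad model does not merely reduce power, it can make rejection almost automatic.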

Thank you for your interesting thoughts Jagdish,

David

From Bob Jensen's threads on the Cult of Statistical Significance ---
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm
 

The Cult of Statistical Significance: How Standard Error Costs Us Jobs, Justice, and Lives ---
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm

Page 15
The doctor who cannot distinguish statistical significance from substantive significance, an F-statistic from a heart attack, is like an economist who ignores opportunity cost---what statistical theorists call the loss function. The doctors of "significance" in medicine and economy are merely "deciding what to say rather than what to do" (Savage 1954, 159). In the 1950s Ronald Fisher published an article and a book intended to rid decision from the vocabulary of working statisticians (1955, 1956). He was annoyed by the rising authority in highbrow circles of those he called "the Neymanites."

Continued on Page 15


pp. 28-31
An example is provided regarding how Merck manipulated statistical inference to keep its killing pain killer Vioxx from being pulled from the market.

Page 31
Another story. The Japanese government in June 2005 increased the limit on the number of whales that may be killed annually in the Antarctic---from around 440 to over 1,000. Deputy Commissioner Akira Nakamae explained why: "We will implement JARPA-2 [the plan for the higher killing] according to the schedule, because the sample size is determined in order to get statistically significant results" (Black 2005). The Japanese hunt the whales, they claim, in order to collect scientific data on them. That and whale steaks. The commissioner is right: increasing sample size, other things equal, does increase the statistical significance of the result. It is, after all, a mathematical fact that statistical significance increases, other things equal, as sample size increases. Thus the theoretical standard error of JARPA-2, s/√(440+560) [given for example the simple mean formula], yields more sampling precision than the standard error of JARPA-1, s/√440. In fact it raises the significance level to Fisher's cutoff. So the Japanese government has found a formula for killing more whales, annually some 560 additional victims, under the cover of getting the conventional level of Fisherian statistical significance for their "scientific" studies.
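The arithmetic behind the whaling story can be sketched in a few lines. The effect size and spread below are made-up illustrative numbers, not whale data: holding them fixed, the test statistic grows in proportion to the square root of the sample size, so raising n from 440 to 1,000 multiplies it by sqrt(1000/440), about 1.51, without the evidence itself changing at all.

```python
import math

def z_statistic(effect, s, n):
    """Test statistic for a sample mean: effect / (s / sqrt(n))."""
    return effect / (s / math.sqrt(n))

# Hypothetical numbers: the same modest effect and spread
# evaluated at the two sample sizes mentioned in the passage.
effect, s = 0.5, 10.0
z_small = z_statistic(effect, s, 440)    # the old quota
z_large = z_statistic(effect, s, 1000)   # the new quota
ratio = z_large / z_small                # equals sqrt(1000 / 440)

print(round(z_small, 3), round(z_large, 3), round(ratio, 3))
```

With these numbers the statistic rises from about 1.05 (insignificant by any conventional cutoff) to about 1.58 purely because n grew, which is exactly the "formula" McCloskey and Ziliak are objecting to.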


pp. 250-251
The textbooks are wrong. The teaching is wrong. The seminar you just attended is wrong. The most prestigious journal in your scientific field is wrong.

You are searching, we know, for ways to avoid being wrong. Science, as Jeffreys said, is mainly a series of approximations to discovering the sources of error. Science is a systematic way of reducing wrongs, or can be. Perhaps you feel frustrated by the random epistemology of the mainstream and don't know what to do. Perhaps you've been sedated by significance and lulled into silence. Perhaps you sense that the power of a Rothamsted test against a plausible Dublin alternative is statistically speaking low, but you feel oppressed by the instrumental variable one should dare not to wield. Perhaps you feel frazzled by what Morris Altman (2004) called the "social psychology rhetoric of fear," the deeply embedded path dependency that keeps the abuse of significance in circulation. You want to come out of it. But perhaps you are cowed by the prestige of Fisherian dogma. Or, worse thought, perhaps you are cynically willing to be corrupted if it will keep a nice job.

 

Multicollinearity --- http://en.wikipedia.org/wiki/Multicollinearity

Question
When we took econometrics, didn't we learn that predictor-variable independence was good and interdependence was bad, especially complicated higher-order interdependencies?
 

"Can You Actually TEST for Multicollinearity?" --- Click Here
http://davegiles.blogspot.com/2013/06/can-you-actually-test-for.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+blogspot%2FjjOHE+%28Econometrics+Beat%3A+Dave+Giles%27+Blog%29

. . .

Now, let's return to the "problem" of multicollinearity.

 
What do we mean by this term, anyway? This turns out to be the key question!

 
Multicollinearity is a phenomenon associated with our particular sample of data when we're trying to estimate a regression model. Essentially, it's a situation where there is insufficient information in the sample of data to enable us to draw "reliable" inferences about the individual parameters of the underlying (population) model.

I'll be elaborating more on the "informational content" aspect of this phenomenon in a follow-up post. Yes, there are various sample measures that we can compute and report, to help us gauge how severe this data "problem" may be. But they're not statistical tests, in any sense of the word.

Because multicollinearity is a characteristic of the sample, and not a characteristic of the population, you should immediately be suspicious when someone starts talking about "testing for multicollinearity". Right?

Apparently not everyone gets it!

There's an old paper by Farrar and Glauber (1967) which, on the face of it might seem to take a different stance. In fact, if you were around when this paper was published (or if you've bothered to actually read it carefully), you'll know that this paper makes two contributions. First, it provides a very sensible discussion of what multicollinearity is all about. Second, the authors take some well known results from the statistics literature (notably, by Wishart, 1928; Wilks, 1932; and Bartlett, 1950) and use them to give "tests" of the hypothesis that the regressor matrix, X, is orthogonal.

How can this be? Well, there's a simple explanation if you read the Farrar and Glauber paper carefully, and note what assumptions are made when they "borrow" the old statistics results. Specifically, there's an explicit (and necessary) assumption that in the population the X matrix is random, and that it follows a multivariate normal distribution.

This assumption is, of course totally at odds with what is usually assumed in the linear regression model! The "tests" that Farrar and Glauber gave us aren't really tests of multicollinearity in the sample. Unfortunately, this point wasn't fully appreciated by everyone.

There are some sound suggestions in this paper, including looking at the sample multiple correlations between each regressor, and all of the other regressors. These, and other sample measures such as variance inflation factors, are useful from a diagnostic viewpoint, but they don't constitute tests of "zero multicollinearity".

So, why am I even mentioning the Farrar and Glauber paper now?

Well, I was intrigued to come across some STATA code (Shehata, 2012) that allows one to implement the Farrar and Glauber "tests". I'm not sure that this is really very helpful. Indeed, this seems to me to be a great example of applying someone's results without understanding (bothering to read?) the assumptions on which they're based!

Be careful out there - and be highly suspicious of strangers bearing gifts!


 
References

Bartlett, M. S., 1950. Tests of significance in factor analysis. British Journal of Psychology, Statistical Section, 3, 77-85.

Farrar, D. E. and R. R. Glauber, 1967. Multicollinearity in regression analysis: The problem revisited. Review of Economics and Statistics, 49, 92-107.

Shehata, E. A. E., 2012. FGTEST: Stata module to compute Farrar-Glauber Multicollinearity Chi2, F, t tests.

Wilks, S. S., 1932. Certain generalizations in the analysis of variance. Biometrika, 24, 477-494.

Wishart, J., 1928. The generalized product moment distribution in samples from a multivariate normal population. Biometrika, 20A, 32-52.

Multicollinearity --- http://en.wikipedia.org/wiki/Multicollinearity

Detection of multicollinearity

Indicators that multicollinearity may be present in a model:

  1. Large changes in the estimated regression coefficients when a predictor variable is added or deleted
  2. Insignificant regression coefficients for the affected variables in the multiple regression, but a rejection of the joint hypothesis that those coefficients are all zero (using an F-test)
  3. If a multivariate regression finds an insignificant coefficient of a particular explanator, yet a simple linear regression of the explained variable on this explanatory variable shows its coefficient to be significantly different from zero, this situation indicates multicollinearity in the multivariate regression.
  4. Some authors have suggested a formal detection-tolerance or the variance inflation factor (VIF) for multicollinearity:
    tolerance = 1 - R_j^2,   VIF = 1 / tolerance,
    where R_j^2 is the coefficient of determination of a regression of explanator j on all the other explanators. A tolerance of less than 0.20 or 0.10 and/or a VIF of 5 or 10 and above indicates a multicollinearity problem (but see O'Brien 2007).[1]
  5. Condition Number Test: The standard measure of ill-conditioning in a matrix is the condition index. It will indicate that the inversion of the matrix is numerically unstable with finite-precision numbers (standard computer floats and doubles). This indicates the potential sensitivity of the computed inverse to small changes in the original matrix. The condition number is computed by finding the square root of (the maximum eigenvalue divided by the minimum eigenvalue). If the condition number is above 30, the regression is said to have significant multicollinearity.
  6. Farrar-Glauber Test:[2] If the variables are found to be orthogonal, there is no multicollinearity; if the variables are not orthogonal, then multicollinearity is present.
  7. Construction of a correlation matrix among the explanatory variables will yield indications as to the likelihood that any given couplet of right-hand-side variables are creating multicollinearity problems. Correlation values (off-diagonal elements) of at least .4 are sometimes interpreted as indicating a multicollinearity problem.
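For concreteness, the tolerance/VIF rule of thumb in item 4 and the condition-number test in item 5 can be computed in a few lines. This is an illustrative sketch, not a packaged routine: the data are simulated, and the function name is my own.

```python
import numpy as np

def vif_and_condition_number(X):
    """Multicollinearity diagnostics for an n-by-k predictor matrix X.

    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on the remaining columns (with an intercept).  The condition number
    is sqrt(max eigenvalue / min eigenvalue) of the correlation matrix.
    """
    n, k = X.shape
    vifs = []
    for j in range(k):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r_sq = 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
        vifs.append(1.0 / (1.0 - r_sq))
    eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
    return np.array(vifs), float(np.sqrt(eigvals.max() / eigvals.min()))

# Two nearly collinear predictors plus one independent predictor:
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.03 * rng.normal(size=200)   # x2 is almost a copy of x1
x3 = rng.normal(size=200)
vifs, cond = vif_and_condition_number(np.column_stack([x1, x2, x3]))
# vifs[0] and vifs[1] blow up; vifs[2] stays near 1; cond exceeds 30.
```

On this simulated design, the two near-duplicate predictors produce VIFs far above the 5-10 rule of thumb, while the independent predictor's VIF stays near 1.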

Consequences of multicollinearity

As mentioned above, one consequence of a high degree of multicollinearity is that, even if the matrix X^T X is invertible, a computer algorithm may be unsuccessful in obtaining an approximate inverse, and if it does obtain one it may be numerically inaccurate. But even in the presence of an accurate X^T X matrix, the following consequences arise:

In the presence of multicollinearity, the estimate of one variable's impact on the dependent variable Y while controlling for the others tends to be less precise than if predictors were uncorrelated with one another. The usual interpretation of a regression coefficient is that it provides an estimate of the effect of a one unit change in an independent variable, X_{1}, holding the other variables constant. If X_{1} is highly correlated with another independent variable, X_{2}, in the given data set, then we have a set of observations for which X_{1} and X_{2} have a particular linear stochastic relationship. We don't have a set of observations for which all changes in X_{1} are independent of changes in X_{2}, so we have an imprecise estimate of the effect of independent changes in X_{1}.

In some sense, the collinear variables contain the same information about the dependent variable. If nominally "different" measures actually quantify the same phenomenon then they are redundant. Alternatively, if the variables are accorded different names and perhaps employ different numeric measurement scales but are highly correlated with each other, then they suffer from redundancy.

One of the features of multicollinearity is that the standard errors of the affected coefficients tend to be large. In that case, the test of the hypothesis that the coefficient is equal to zero may lead to a failure to reject a false null hypothesis of no effect of the explanator.

A principal danger of such data redundancy is that of overfitting in regression analysis models. The best regression models are those in which the predictor variables each correlate highly with the dependent (outcome) variable but correlate at most only minimally with each other. Such a model is often called "low noise" and will be statistically robust (that is, it will predict reliably across numerous samples of variable sets drawn from the same statistical population).

So long as the underlying specification is correct, multicollinearity does not actually bias results; it just produces large standard errors in the related independent variables. If, however, there are other problems (such as omitted variables) which introduce bias, multicollinearity can multiply (by orders of magnitude) the effects of that bias.[citation needed] More importantly, the usual use of regression is to take coefficients from the model and then apply them to other data. If the pattern of multicollinearity in the new data differs from that in the data that was fitted, such extrapolation may introduce large errors in the predictions.[3]

Remedies for multicollinearity

  1. Make sure you have not fallen into the dummy variable trap; including a dummy variable for every category (e.g., summer, autumn, winter, and spring) and including a constant term in the regression together guarantee perfect multicollinearity.
  2. Try seeing what happens if you use independent subsets of your data for estimation and apply those estimates to the whole data set. Theoretically you should obtain somewhat higher variance from the smaller datasets used for estimation, but the expectation of the coefficient values should be the same. Naturally, the observed coefficient values will vary, but look at how much they vary.
  3. Leave the model as is, despite multicollinearity. The presence of multicollinearity doesn't affect the efficacy of extrapolating the fitted model to new data provided that the predictor variables follow the same pattern of multicollinearity in the new data as in the data on which the regression model is based.[4]
  4. Drop one of the variables. An explanatory variable may be dropped to produce a model with significant coefficients. However, you lose information (because you've dropped a variable). Omission of a relevant variable results in biased coefficient estimates for the remaining explanatory variables.
  5. Obtain more data, if possible. This is the preferred solution. More data can produce more precise parameter estimates (with lower standard errors), as seen from the formula in variance inflation factor for the variance of the estimate of a regression coefficient in terms of the sample size and the degree of multicollinearity.
  6. Mean-center the predictor variables. Generating polynomial terms (i.e., for x_1, x_1^2, x_1^3, etc.) can cause some multicollinearity if the variable in question has a limited range (e.g., [2,4]). Mean-centering will eliminate this special kind of multicollinearity. However, in general, this has no effect. It can be useful in overcoming problems arising from rounding and other computational steps if a carefully designed computer program is not used.
  7. Standardize your independent variables. This may help reduce a false flagging of a condition index above 30.
  8. It has also been suggested that using the Shapley value, a game theory tool, the model could account for the effects of multicollinearity. The Shapley value assigns a value for each predictor and assesses all possible combinations of importance.[5]
  9. Ridge regression or principal component regression can be used.
  10. If the correlated explanators are different lagged values of the same underlying explanator, then a distributed lag technique can be used, imposing a general structure on the relative values of the coefficients to be estimated.

Note that one technique that does not work in offsetting the effects of multicollinearity is orthogonalizing the explanatory variables (linearly transforming them so that the transformed variables are uncorrelated with each other): By the Frisch–Waugh–Lovell theorem, using projection matrices to make the explanatory variables orthogonal to each other will lead to the same results as running the regression with all non-orthogonal explanators included.
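As a sketch of remedy 9 above, ridge regression shrinks the erratic individual coefficients of collinear predictors toward stable values. Everything below is illustrative: the data are simulated, and the penalty lam = 10.0 is an arbitrary choice, not a recommendation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)      # severe collinearity
y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)
X = np.column_stack([x1, x2])

def ridge(X, y, lam):
    # Closed-form ridge estimator: (X'X + lam*I)^(-1) X'y.
    # lam = 0 reduces to ordinary least squares.
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

b_ols = ridge(X, y, 0.0)      # individual coefficients are erratic,
b_ridge = ridge(X, y, 10.0)   # but ridge pulls both back toward 1
```

Notice that the sum b_ols[0] + b_ols[1] is well estimated even under collinearity; it is the split between the two coefficients that OLS cannot pin down, and that the ridge penalty stabilizes.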

Examples of contexts in which multicollinearity arises

Survival analysis

Multicollinearity may represent a serious issue in survival analysis. The problem is that time-varying covariates may change their value over the time line of the study. A special procedure is recommended to assess the impact of multicollinearity on the results. See Van den Poel & Larivière (2004)[6] for a detailed discussion.

Interest rates for different terms to maturity

In various situations it might be hypothesized that multiple interest rates of various terms to maturity all influence some economic decision, such as the amount of money or some other financial asset to hold, or the amount of fixed investment spending to engage in. In this case, including these various interest rates will in general create a substantial multicollinearity problem because interest rates tend to move together. If in fact each of the interest rates has its own separate effect on the dependent variable, it can be extremely difficult to separate out their effects.


Simpson's Paradox and Cross-Validation

Simpson's Paradox --- http://en.wikipedia.org/wiki/Simpson%27s_paradox

"Simpson’s Paradox: A Cautionary Tale in Advanced Analytics," by Steve Berman, Leandro DalleMule, Michael Greene, and John Lucker, Significance: Statistics Making Sense, October 2012 ---
http://www.significancemagazine.org/details/webexclusive/2671151/Simpsons-Paradox-A-Cautionary-Tale-in-Advanced-Analytics.html

Analytics projects often present us with situations in which common sense tells us one thing, while the numbers seem to tell us something much different. Such situations are often opportunities to learn something new by taking a deeper look at the data. Failure to perform a sufficiently nuanced analysis, however, can lead to misunderstandings and decision traps. To illustrate this danger, we present several instances of Simpson’s Paradox in business and non-business environments. As we demonstrate below, statistical tests and analysis can be confounded by a simple misunderstanding of the data. Often taught in elementary probability classes, Simpson’s Paradox refers to situations in which a trend or relationship that is observed within multiple groups reverses when the groups are combined. Our first example describes how Simpson’s Paradox accounts for a highly surprising observation in a healthcare study. Our second example involves an apparent violation of the law of supply and demand: we describe a situation in which price changes seem to bear no relationship with quantity purchased. This counterintuitive relationship, however, disappears once we break the data into finer time periods. Our final example illustrates how a naive analysis of marginal profit improvements resulting from a price optimization project can potentially mislead senior business management, leading to incorrect conclusions and inappropriate decisions. Mathematically, Simpson’s Paradox is a fairly simple—if counterintuitive—arithmetic phenomenon. Yet its significance for business analytics is quite far-reaching. Simpson’s Paradox vividly illustrates why business analytics must not be viewed as a purely technical subject appropriate for mechanization or automation. Tacit knowledge, domain expertise, common sense, and above all critical thinking, are necessary if analytics projects are to reliably lead to appropriate evidence-based decision making.

The past several years have seen decision making in many areas of business steadily evolve from judgment-driven domains into scientific domains in which the analysis of data and careful consideration of evidence are more prominent than ever before. Additionally, mainstream books, movies, alternative media and newspapers have covered many topics describing how fact and metric driven analysis and subsequent action can exceed results previously achieved through less rigorous methods. This trend has been driven in part by the explosive growth of data availability resulting from Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM) applications and the Internet and eCommerce more generally. There are estimates that predict that more data will be created in the next four years than in the history of the planet. For example, Wal-Mart handles over one million customer transactions every hour, feeding databases estimated at more than 2.5 petabytes in size - the equivalent of 167 times the books in the United States Library of Congress.

Additionally, computing power has increased exponentially over the past 30 years and this trend is expected to continue. In 1969, astronauts landed on the moon with a 32-kilobyte memory computer. Today, the average personal computer has more computing power than the entire U.S. space program at that time. Decoding the human genome took 10 years when it was first done in 2003; now the same task can be performed in a week or less. Finally, a large consumer credit card issuer crunched two years of data (73 billion transactions) in 13 minutes, which not long ago took over one month.

This explosion of data availability and the advances in computing power and processing tools and software have paved the way for statistical modeling to be at the front and center of decision making not just in business, but everywhere. Statistics is the means to interpret data and transform vast amounts of raw data into meaningful information.

However, paradoxes and fallacies lurk behind even elementary statistical exercises, with the important implication that exercises in business analytics can produce deceptive results if not performed properly. This point can be neatly illustrated by pointing to instances of Simpson’s Paradox. The phenomenon is named after Edward Simpson, who described it in a technical paper in the 1950s, though the prominent statisticians Karl Pearson and Udny Yule noticed the phenomenon over a century ago. Simpson’s Paradox, which regularly crops up in statistical research, business analytics, and public policy, is a prime example of why statistical analysis is useful as a corrective for the many ways in which humans intuit false patterns in complex datasets.

Simpson’s Paradox is in a sense an arithmetic trick: weighted averages can lead to reversals of meaningful relationships—i.e., a trend or relationship that is observed within each of several groups reverses when the groups are combined. Simpson’s Paradox can arise in any number of marketing and pricing scenarios; we present here case studies describing three such examples. These case studies serve as cautionary tales: there is no comprehensive mechanical way to detect or guard against instances of Simpson’s Paradox leading us astray. To be effective, analytics projects should be informed by both a nuanced understanding of statistical methodology as well as a pragmatic understanding of the business being analyzed.
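The arithmetic behind such a reversal is easy to verify. The counts below follow the widely cited kidney-stone treatment example, which has exactly the structure described here: treatment A wins within each subgroup, yet loses on the pooled totals because the subgroup sizes differ.

```python
# (successes, trials) per subgroup, from the widely cited
# kidney-stone example of Simpson's Paradox.
a_small, a_large = (81, 87), (192, 263)    # treatment A
b_small, b_large = (234, 270), (55, 80)    # treatment B

def rate(group):
    s, n = group
    return s / n

def pooled(g1, g2):
    # Pooled success rate: the weighted average that causes the reversal.
    return (g1[0] + g2[0]) / (g1[1] + g2[1])

# Within each subgroup, A has the higher success rate...
assert rate(a_small) > rate(b_small)       # 93.1% vs 86.7%
assert rate(a_large) > rate(b_large)       # 73.0% vs 68.8%
# ...yet pooled across subgroups, B comes out ahead: 78.0% vs 82.6%.
assert pooled(a_small, a_large) < pooled(b_small, b_large)
```

The reversal is driven entirely by the weights: A was applied mostly to the hard (large-stone) cases and B mostly to the easy ones.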

The first case study, from the medical field, presents a surface indication on the effects of smoking that is at odds with common sense. Only when the data are viewed at a more refined level of analysis does one see the true effects of smoking on mortality. In the second case study, decreasing prices appear to be associated with decreasing sales and increasing prices appear to be associated with increasing sales. On the surface, this makes no sense. A fundamental tenet of economics is that of the demand curve: as the price of a good or service increases, consumers demand less of it. Simpson’s Paradox is responsible for an apparent—though illusory—violation of this fundamental law of economics. Our final case study shows how marginal improvements in profitability in each of the sales channels of a given manufacturer may result in an apparent marginal reduction in the overall profitability the business. This seemingly contradictory conclusion can also lead to serious decision traps if not properly understood.

Case Study 1: Are those warning labels really necessary?

We start with a simple example from the healthcare world. This example both illustrates the phenomenon and serves as a reminder that it can appear in any domain.

The data are taken from a 1996 follow-up study from Appleton, French, and Vanderpump on the effects of smoking. The follow-up catalogued women from the original study, categorizing them based on the age groups in the original study, as well as whether the women were smokers or not. The study measured the deaths of smokers and non-smokers during the 20-year period.

Continued in article

What happened to cross-validation in accountics science research?

Over time I've become increasingly critical of the lack of validation in accountics science, and I've focused mainly upon lack of replication by independent researchers and lack of commentaries published in accountics science journals ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

Another type of validation that seems to be on the decline in accountics science is so-called cross-validation. Accountics scientists seem content with statistical inference tests on Z-scores, F-tests, and correlation significance. Cross-validation seems to be less common; at least, I'm having trouble finding examples of it. Cross-validation entails comparing sample findings with findings in holdout samples.

Cross Validation --- http://en.wikipedia.org/wiki/Cross-validation_%28statistics%29
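For readers who have not seen the mechanics, a minimal holdout cross-validation for a logit model might look like the sketch below. The data are simulated stand-ins (no real auditor-change data), and the estimator is a bare-bones gradient ascent rather than a packaged logit routine.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated stand-in data: one firm characteristic x and a binary
# outcome y (think "changed auditor" = 1).  Nothing here is real data.
n = 400
x = rng.normal(size=n)
true_p = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))
y = (rng.random(n) < true_p).astype(float)

X = np.column_stack([np.ones(n), x])
X_fit, y_fit = X[:300], y[:300]       # estimation sample
X_hold, y_hold = X[300:], y[300:]     # holdout sample

def fit_logit(X, y, steps=2000, lr=0.1):
    """Logistic regression via plain gradient ascent on the log-likelihood."""
    b = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        b += lr * X.T @ (y - p) / len(y)
    return b

b = fit_logit(X_fit, y_fit)
# The honest performance figure is classification accuracy on the
# holdout observations the model never saw during estimation:
pred = 1.0 / (1.0 + np.exp(-X_hold @ b)) > 0.5
accuracy = float((pred == (y_hold > 0.5)).mean())
```

The point of the exercise is the last two lines: the model is judged on observations it never saw, which is precisely the check that in-sample Z-statistics cannot provide.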

When reading the following paper, which uses logit regression to predict audit firm changes, it struck me that this would've been an ideal candidate for the authors to have performed cross-validation using holdout samples.
"Audit Quality and Auditor Reputation: Evidence from Japan," by Douglas J. Skinner and Suraj Srinivasan, The Accounting Review, September 2012, Vol. 87, No. 5, pp. 1737-1765.

We study events surrounding ChuoAoyama's failed audit of Kanebo, a large Japanese cosmetics company whose management engaged in a massive accounting fraud. ChuoAoyama was PwC's Japanese affiliate and one of Japan's largest audit firms. In May 2006, the Japanese Financial Services Agency (FSA) suspended ChuoAoyama for two months for its role in the Kanebo fraud. This unprecedented action followed a series of events that seriously damaged ChuoAoyama's reputation. We use these events to provide evidence on the importance of auditors' reputation for quality in a setting where litigation plays essentially no role. Around one quarter of ChuoAoyama's clients defected from the firm after its suspension, consistent with the importance of reputation. Larger firms and those with greater growth options were more likely to leave, also consistent with the reputation argument.

Jensen Comment
Rather than just use statistical inference tests on logit model Z-statistics, it struck me that in statistics journals the referees might've requested cross-validation tests on holdout samples of firms that changed auditors and firms that did not change auditors.

I do find somewhat more frequent cross-validation studies in finance, particularly in the area of discriminant analysis in bankruptcy prediction models.

Instances of cross-validation in accounting research journals seem to have died out in the past 20 years. There are earlier examples of cross-validation in accounting research journals. Several examples are cited below:

"A field study examination of budgetary participation and locus of control," by  Peter Brownell, The Accounting Review, October 1982 ---
http://www.jstor.org/discover/10.2307/247411?uid=3739712&uid=2&uid=4&uid=3739256&sid=21101146090203

"Information choice and utilization in an experiment on default prediction," Abdel-Khalik and KM El-Sheshai - Journal of Accounting Research, 1980 ---
http://www.jstor.org/discover/10.2307/2490581?uid=3739712&uid=2&uid=4&uid=3739256&sid=21101146090203

"Accounting ratios and the prediction of failure: Some behavioral evidence," by Robert Libby, Journal of Accounting Research, Spring 1975 ---
http://www.jstor.org/discover/10.2307/2490653?uid=3739712&uid=2&uid=4&uid=3739256&sid=21101146090203

There are other examples of cross-validation in the 1970s and 1980s, particularly in bankruptcy prediction.

I have trouble finding illustrations of cross-validation in the accounting research literature in more recent years. Has the interest in cross-validating waned along with interest in validating accountics research? Or am I just being careless in my search for illustrations?


"SLAVES TO THE ALGORITHM: More and more of modern life is steered by algorithms. But what are they exactly, and who is behind them? Tom Whipple follows the trail," Intelligent Life Magazine, May/June 2013 ---
http://moreintelligentlife.com/content/features/anonymous/slaves-algorithm?page=full

There are many reasons to believe that film stars earn too much. Brad Pitt and Angelina Jolie once hired an entire train to travel from London to Glasgow. Tom Cruise’s daughter Suri is reputed to have a wardrobe worth $400,000. Nicolas Cage once paid $276,000 for a dinosaur head. He would have got it for less, but he was bidding against Leonardo DiCaprio.

Nick Meaney has a better reason for believing that the stars are overpaid: his algorithm tells him so. In fact, he says, with all but one of the above actors, the studios are almost certainly wasting their money. Because, according to his movie-analysis software, there are only three actors who make money for a film. And there is at least one A-list actress who is worth paying not to star in your next picture. 

The headquarters of Epagogix, Meaney’s company, do not look like the sort of headquarters from which one would confidently launch an attack on Hollywood royalty. A few attic rooms in a shared south London office, they don’t even look as if they would trouble Dollywood. But my meeting with Meaney will be cut short because of another he has, with two film executives. And at the end, he will ask me not to print the full names of his analysts, or his full address. He is worried that they could be poached.

Worse though, far worse, would be if someone in Hollywood filched his computer. It is here that the iconoclasm happens. When Meaney is given a job by a studio, the first thing he does is quantify thousands of factors, drawn from the script. Are there clear bad guys? How much empathy is there with the protagonist? Is there a sidekick? The complex interplay of these factors is then compared by the computer to their interplay in previous films, with known box-office takings. The last calculation is what it expects the film to make. In 83% of cases, this guess turns out to be within $10m of the total. Meaney, to all intents and purposes, has an algorithm that judges the value—or at least the earning power—of art.

To explain how, he shows me a two-dimensional representation: a grid in which each column is an input, each row a film. "Curiously," Meaney says, "if we block this column…" With one hand, he obliterates the input labelled "star", casually rendering everyone from Clooney to Cruise, Damon to De Niro, an irrelevancy. "In almost every case, it makes no difference to the money column."

"For me that’s interesting. The first time I saw that I said to the mathematician, ‘You’ve got to change your program—this is wrong.’ He said, ‘I couldn’t care less—it’s the numbers.’" There are four exceptions to his rules. If you hire Will Smith, Brad Pitt or Johnny Depp, you seem to make a return. The fourth? As far as Epagogix can tell, there is an actress, one of the biggest names in the business, who is actually a negative influence on a film. "It’s very sad for her," he says. But hers is a name he cannot reveal. 

IF YOU TAKE the Underground north from Meaney’s office, you will pass beneath the housing estates of south London. Thousands of times every second, above your head, someone will search for something on Google. It will be an algorithm that determines what they see; an algorithm that is their gatekeeper to the internet. It will be another algorithm that determines what adverts accompany the search—gatekeeping does not pay for itself.

Algorithms decide what we are recommended on Amazon, what films we are offered on Netflix. Sometimes, newspapers warn us of their creeping, insidious influence; they are the mysterious sciencey bit of the internet that makes us feel websites are stalking us—the software that looks at the e-mail you receive and tells the Facebook page you look at that, say, Pizza Hut should be the ad it shows you. Some of those newspaper warnings themselves come from algorithms. Crude programs already trawl news pages, summarise the results, and produce their own article, by-lined, in the case of Forbes magazine, "By Narrative Science".

Others produce their own genuine news. On February 1st, the Los Angeles Times website ran an article that began "A shallow magnitude 3.2 earthquake was reported Friday morning." The piece was written at a time when quite possibly every reporter was asleep. But it was grammatical, coherent, and did what any human reporter writing a formulaic article about a small earthquake would do: it went to the US Geological Survey website, put the relevant numbers in a boilerplate article, and hit send. In this case, however, the donkey work was done by an algorithm.

But it is not all new. It is also an algorithm that determines something as old-fashioned as the route a train takes through the Underground network—even which train you yourself take. An algorithm, at its most basic, is not a mysterious sciencey bit at all; it is simply a decision-making process. It is a flow chart, a computer program that can stretch to pages of code or is as simple as "If x is greater than y, then choose z".

What has changed is what algorithms are doing. The first algorithm was created in the ninth century by the Arabic scholar al-Khwarizmi, from whose name the word is a corruption. Ever since, they have been mechanistic, rational procedures that interact with mechanistic, rational systems. Today, though, they are beginning to interact with humans. The advantage is obvious. Drawing in more data than any human ever could, they spot correlations that no human would. The drawbacks are only slowly becoming apparent.

Continue your journey into central London, and the estates give way to terraced houses divided into flats. Every year these streets inhale thousands of young professional singles. In the years to come, they will be gently exhaled: gaining partners and babies and dogs, they will migrate to the suburbs. But before that happens, they go to dinner parties and browse dating websites in search of that spark—the indefinable chemistry that tells them they have found The One.

And here again they run into an algorithm. The leading dating sites use mathematical formulae and computations to sort their users’ profiles into pairs, and let the magic take its probabilistically predicted course.

Not long after crossing the river, your train will pass the server farms of the Square Mile—banks of computers sited close to the fibre-optic cables, giving tiny headstarts on trades. Within are stored secret lines of code worth billions of pounds. A decade ago computer trading was an oddity; today a third of all deals in the City of London are executed automatically by algorithms, and in New York the figure is over half. Maybe, these codes tell you, if fewer people buy bananas at the same time as more buy gas, you should sell steel. No matter if you don’t know why; sell sell sell. In nanoseconds a trade is made, in milliseconds the market moves. And, when it all goes wrong, it goes wrong faster than it takes a human trader to turn his or her head to look at the unexpectedly red numbers on the screen.

Finally, your train will reach Old Street—next door to the City, but a very different place. This is a part of town where every office seems to have a pool table, every corner a beanbag, every receptionist an asymmetric haircut. In one of those offices is TechHub. With its bare brick walls and website that insists on being your friend, this is the epitome of what the British government insists on calling Silicon Roundabout. After all, what America can do with valleys, surely Britain can do with traffic-flow measures.

Inside are the headquarters of Simon Williams’s company QuantumBlack. The world, Williams says, has changed in the past decade—even if not everyone has noticed. “There’s a ton more data around. There’s new ways of handling it, processing it, manipulating it, interrogating it. The tooling has changed. The speed at which it happens has changed. You’re shaping it, sculpting it, playing with it.”

QuantumBlack is, he says, a "data science" agency. In the same way as, ten years ago, companies hired digital-media agencies to make sense of e-commerce, today they need to understand data-commerce. "There’s been an alignment of stars. We’ve hit a crossover point in terms of the cost of storing and processing data versus ten years ago. Then, capturing and storing data was expensive, now it is a lot less so. It’s become economically viable to look at a shed load more data."

When he says "look at", he means analysing it with algorithms. Some may be as simple as spotting basic correlations. Some apply the same techniques used to spot patterns in the human genome, or to assign behavioural patterns to individual hedge-fund managers. But there is no doubt which of Williams’s clients is the most glamorous: Formula 1 teams. This, it is clear, is the part of the job he loves the most.

"It’s a theatre, an opera," he says. "The fun isn’t in the race, it’s in the strategy—the smallest margins win or lose races." As crucial as the driver, is when that driver goes for a pit stop, and how his car is set up. This is what QuantumBlack advises on: how much fuel you put in, what tyres to use, how often to change those tyres. "Prior to the race, we look at millions of scenarios. You’re constantly exploring."

Continued in article

Bob Jensen's threads on real science versus pseudo science ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm#Pseudo-Science



Gasp! How could an accountics scientist question such things? This is sacrilege!
Let me end my remarks with a question: Have Ball and Brown (1968)—and Beaver (1968) for that matter, if I can bring Bill Beaver into it—have we had too much influence on the research agenda to the point where other questions and methods are being overlooked?
Phil Brown of Ball and Brown Fame

"How Can We Do Better?" by Philip R. Brown (of Ball and Brown Fame), Accounting Horizons (Forum on the State of Accounting Scholarship), December 2013 ---
http://aaajournals.org/doi/full/10.2308/acch-10365
Not Free

Philip R. Brown AM is an Honorary Professor at The University of New South Wales and Senior Honorary Research Fellow at The University of Western Australia.

I acknowledge the thoughtful comments of Sudipta Basu, who arranged and chaired this session at the 2012 American Accounting Association (AAA) Annual Meeting, Washington, DC.

The video presentation can be accessed by clicking the link in Appendix A.

Corresponding author: Philip R. Brown AM. Email:

When Sudipta Basu asked me whether I would join this panel, he was kind enough to share with me the proposal he put to the conference organizers. As background to his proposal, Sudipta had written:

Analytical and empirical researchers generate numerous results about accounting, as do logicians reasoning from conceptual frameworks. However, there are few definitive tests that permit us to negate propositions about good accounting.

This panel aims to identify a few “most wrong” beliefs held by accounting experts—academics, regulators, practitioners—where a “most wrong” belief is one that is widespread and fundamentally misguided about practices and users in any accounting domain.

While Sudipta's proposal resonated with me, I did wonder why he asked me to join the panel, and whether I am seen these days as just another “grumpy old man.” Yes, I am no doubt among the oldest here today, but grumpy? You can make up your own mind on that, after you have read what I have to say.

This essay begins with several gripes about editors, reviewers, and authors, along with suggestions for improving the publication process for all concerned. The next section contains observations on financial accounting standard setting. The essay concludes with a discussion of research myopia, namely, the unfortunate tendency of researchers to confine their work to familiar territory, much like the drunk who searches for his keys under the street light because “that is where the light is.”



 
ON EDITORS AND REVIEWERS, AND AUTHORS

I have never been a regular editor, although I have chaired a journal's board of management and been a guest editor, and I appointed Ray Ball to his first editorship (Ray was the inaugural editor of the Australian Journal of Management). I have, however, reviewed many submissions for a whole raft of journals, and written literally hundreds of papers, some of which have been published. As I reflect on my involvement in the publications process over more than 50 years, I do have a few suggestions on how we can do things better. In the spirit of this panel session, I have put my suggestions in the form of gripes about editors, reviewers, and authors.

One-eyed editors—and reviewers—who define the subject matter as outside their journal's interests are my first gripe; and of course I except journals with a mission that is stated clearly and in unequivocal terms for all to see. The best editors and the best reviewers are those who are open-minded, who avoid prejudging submissions by reference to some particular set of questions or modes of thinking that have become popular over the last five years or so. Graeme Dean, former editor of Abacus, and Nick Dopuch, former editor of the Journal of Accounting Research, are fine examples, from years gone by, of what it means to be an excellent editor.

Editors who are reluctant to entertain new ways of looking at old questions are a second gripe. Many years ago I was asked to review a paper titled “The Last Word on …” (I will not fill in the dots because the author may still be alive.) But at the time I thought, what a strange title! Can any academic reasonably believe they are about to have the last say on any important accounting issue? We academics thrive on questioning previous works, and editors and their reviewers do well when they nurture this mindset.

My third gripe concerns editors who, perhaps unwittingly, send papers to reviewers with vested interests and the reviewers do not just politely return the paper to the editor and explain their conflict of interest. A fourth concerns editors and reviewers who discourage replications: their actions signal a disciplinary immaturity. I am referring to rejecting a paper that repeats an experiment, perhaps in another country, purely because it has been done before. There can be good reasons for replicating a study, for example if the external validity of the earlier study legitimately can be questioned (perhaps different outcomes are reasonably expected in another institutional setting), or if methodological advances indicate a likely design flaw. Last, there are editors and reviewers who do not entertain papers that fail to reject the null hypothesis. If the alternative is well-reasoned and the study is sound, and they can be big “ifs,” then failure to reject the null can be informative, for it may indicate where our knowledge is deficient and more work can be done.1

It is not only editors and reviewers who test my emotional state. I do get a bit short when I review papers that fail to appreciate that the ideas they are dealing with have long yet uncited histories, sometimes in journals that are not based in North America. I am particularly unimpressed when there is an all-too-transparent and excessive citation of works by editors and potential reviewers, as if the judgments of these folks could possibly be influenced by that behavior. Other papers frustrate me when they are technically correct but demonstrate the trivial or the obvious, and fail to draw out the wider implications of their findings. Then there are authors who rely on unnecessarily coarse “control” variables which, if measured more finely, may well threaten their findings.2 Examples are dummy variables for common law/code law countries, for “high” this and “low” that, for the presence or absence of an audit/nomination/compensation committee, or the use of an industry or sector variable without saying which features of that industry or sector are likely to matter and why a binary representation is best. In a nutshell, I fear there may be altogether too many dummies in financial accounting research!
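Brown's quip about "too many dummies" has a simple statistical basis: collapsing a continuous control into a binary high/low indicator throws away most of its variation. The following Python sketch is purely illustrative (it is not from Brown's essay); it simulates a linear relation and compares the fit obtained from the continuous regressor against a median-split dummy built from the same variable.

```python
# Illustration: dichotomizing a continuous control variable into a
# "high/low" dummy discards information. We simulate y = 2x + noise and
# compare one-variable OLS fits using x versus its median-split dummy.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(size=n)             # continuous "control" variable
y = 2.0 * x + rng.normal(size=n)   # outcome driven by x plus noise

def r_squared(feature, outcome):
    """R^2 of a one-regressor OLS fit (with intercept)."""
    X = np.column_stack([np.ones_like(feature), feature])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    resid = outcome - X @ beta
    return 1.0 - resid.var() / outcome.var()

r2_continuous = r_squared(x, y)
r2_dummy = r_squared((x > np.median(x)).astype(float), y)

print(f"continuous x:       R^2 = {r2_continuous:.3f}")
print(f"median-split dummy: R^2 = {r2_dummy:.3f}")
# For a normally distributed regressor, a median split can capture only
# about 2/pi (~64%) of the explained variance of the continuous variable.
```

The gap widens further when the dummy cut point is arbitrary rather than at the median, which is one reason a "high/low" or "common law/code law" indicator can mask effects that a finer measure would reveal.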

Finally, there are the International Financial Reporting Standards (IFRS) papers that fit into the category of what I describe as “before and after studies.” They focus on changes following the adoption of IFRS promulgated by the London-based International Accounting Standards Board (IASB). A major concern, and I have been guilty too, is that these papers, by and large, do not deal adequately with the dynamics of what has been for many countries a period of profound change. In particular, there is a trade-off between (1) experimental noise from including too long a “before” and “after” history, and (2) not accommodating the process of change, because the “before” and “after” periods are way too short. Neither do they appear to control convincingly for other time-related changes, such as the introduction of new accounting and auditing standards, amendments to corporations laws and stock exchange listing rules, the adoption of corporate governance codes of conduct, more stringent compliance monitoring and enforcement mechanisms, or changes in, say, stock market liquidity as a result of the introduction of new trading platforms and protocols, amalgamations among market providers, the explosion in algorithmic trading, and the increasing popularity among financial institutions of trading in “dark pools.”



 
ON FINANCIAL ACCOUNTING STANDARD SETTING

I count a number of highly experienced financial accounting standard setters among my friends and professional acquaintances, and I have great regard for the difficulties they face in what they do. Nonetheless, I do wonder


. . .

 
ON RESEARCH MYOPIA

A not uncommon belief among academics is that we have been or can be a help to accounting standard setters. We may believe we can help by saying something important about whether a new financial accounting standard, or set of standards, is an improvement. Perhaps we feel this way because we have chosen some predictive criterion and been able to demonstrate a statistically reliable association between accounting information contained in some database and outcomes that are consistent with that criterion. Ball and Brown (1968, 160) explained the choice of criterion this way: “An empirical evaluation of accounting income numbers requires agreement as to what real-world outcome constitutes an appropriate test of usefulness.” Note their reference to a requirement to agree on the test. They were referring to the choice of criterion being important to the persuasiveness of their tests, which were fundamental and related to the “usefulness” of U.S. GAAP income numbers to stock market investors 50 years ago. As time went by and the financial accounting literature grew accordingly, financial accounting researchers have looked in many directions for capital market outcomes in their quest for publishable results.

Research on IFRS can be used to illustrate my point. Those who have looked at the consequences of IFRS adoption have mostly studied outcomes they believed would interest participants in equity markets and, to a lesser extent, parties to debt contracts. Many beneficial outcomes have now been claimed,4 consistent with benefits asserted by advocates of IFRS. Examples are more comparable accounting numbers; earnings that are higher “quality” and less subject to managers' discretion; lower barriers to international capital flows; improved analysts' forecasts; deeper and more liquid equity markets; and a lower cost of capital. But the evidence is typically coarse in nature; and so often the results are inconsistent because of the different outcomes selected as tests of “usefulness,” or differences in the samples studied (time periods, countries, industries, firms, etc.) and in research methods (how models are specified and variables measured, which estimators are used, etc.). The upshot is that it can be difficult if not impossible to reconcile the many inconsistencies, and for standard setters to relate reported findings to the judgments they must make.

Despite the many largely capital market outcomes that have been studied, some observers of our efforts must be disappointed that other potentially beneficial outcomes of adopting IFRS have largely been overlooked. Among them are the wider benefits to an economy that flow from EU membership (IFRS are required),5 or access to funds provided by international agencies such as the World Bank, or less time spent by CFOs of international companies when comparing the financial performance of divisions operating in different countries and on consolidating the financial statements of foreign subsidiaries, or labor market benefits from more flexibility in the supply of professionally qualified accountants, or “better” accounting standards from pooling the skills of standard setters in different jurisdictions, or less costly and more consistent professional advice when accounting firms do not have to deal with as much cross-country variation in standards and can concentrate their high-level technical skills, or more effective compliance monitoring and enforcement as regulators share their knowledge and experience, or the usage of IFRS by “millions (of small and medium enterprises) in more than 80 countries” (Pacter 2012), or in some cases better education of tomorrow's accounting professionals.6 I am sure you could easily add to this list if you wished.

In sum, we can help standard setters, yes, but only in quite limited ways.7 Standard setting is inherently political in nature and will remain that way as long as there are winners and losers when standards change. That is one issue. Another is that the results of capital markets studies are typically too coarse to be definitive when it comes to the detailed issues that standard setters must consider. A third is that accounting standards have ramifications extending far beyond public financial markets and a much more expansive view needs to be taken before we can even hope to understand the full range of benefits (and costs) of adopting IFRS.

Let me end my remarks with a question: Have Ball and Brown (1968)—and Beaver (1968) for that matter, if I can bring Bill Beaver into it—have we had too much influence on the research agenda to the point where other questions and methods are being overlooked?

February 27, 2014 Reply from Paul Williams

Bob,
If you read that last Horizons section provided by "thought leaders" you realize the old guys are not saying anything they could not have realized 30 years ago. That they didn't realize it then (or did, but it was not in their interest to say so), which led them to run journals whose singular purpose seemed to be to enable them and their cohorts to create politically correct academic reputations, is not something to ask forgiveness for at the end of your career.

Like the sinner on his deathbed asking for God's forgiveness, now is a hell of a time to suddenly get religion. If you heard these fellows speak when they were young they certainly didn't speak with voices that adumbrated any doubt that what they were doing was rigorous research and anyone doing anything else was the intellectual hoi polloi.

Oops, sorry we created an academy that all of us now regret, but, hey, we got ours. It's our mess, but now we are telling you it's a mess you have to clean up. It isn't like no one was saying these things 30 years ago (you were, as well as others including yours truly), and we have intimate knowledge of how we were treated by these geniuses.

 



David Johnstone asked me to write a paper on the following:
"A Scrapbook on What's Wrong with the Past, Present and Future of Accountics Science"
Bob Jensen
February 19, 2014
SSRN Download:  http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2398296 

Abstract

For operational convenience I define accountics science as research that features equations and/or statistical inference. Historically, there was a heated debate in the 1920s as to whether the main research journal of academic accounting, The Accounting Review (TAR) that commenced in 1926, should be an accountics journal with articles that mostly featured equations. Practitioners and teachers of college accounting won that debate.

TAR articles and accountancy doctoral dissertations prior to the 1970s seldom had equations. For reasons summarized below, doctoral programs and TAR evolved to the point where, in the 1990s, having equations became virtually a necessary condition for a doctoral dissertation and acceptance of a TAR article. Qualitative normative and case method methodologies disappeared from doctoral programs.

What’s really meant by “featured equations” in doctoral programs is merely symbolic of the fact that North American accounting doctoral programs pushed out most of the accounting to make way for econometrics and statistics that are now keys to the kingdom for promotion and tenure in accounting schools ---
http://www.trinity.edu/rjensen/Theory01.htm#DoctoralPrograms

The purpose of this paper is to make a case that the accountics science monopoly of our doctoral programs and published research is seriously flawed, especially its lack of concern about replication and focus on simplified artificial worlds that differ too much from reality to creatively discover findings of greater relevance to teachers of accounting and practitioners of accounting. Accountics scientists themselves became a Cargo Cult.

Shielding Against Validity Challenges in Plato's Cave ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

Common Accountics Science and Econometric Science Statistical Mistakes ---
http://www.cs.trinity.edu/~rjensen/temp/AccounticsScienceStatisticalMistakes.htm

The Cult of Statistical Significance: How Standard Error Costs Us Jobs, Justice, and Lives ---
http://www.cs.trinity.edu/~rjensen/temp/DeirdreMcCloskey/StatisticalSignificance01.htm

How Accountics Scientists Should Change: 
"Frankly, Scarlett, after I get a hit for my resume in The Accounting Review I just don't give a damn"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm
One more mission in what's left of my life will be to try to change this
http://www.cs.trinity.edu/~rjensen/temp/AccounticsDamn.htm 

What went wrong in accounting/accountics research?  ---
http://www.trinity.edu/rjensen/theory01.htm#WhatWentWrong

The Sad State of Accountancy Doctoral Programs That Do Not Appeal to Most Accountants ---
http://www.trinity.edu/rjensen/theory01.htm#DoctoralPrograms




Accountics is the mathematical science of values.
Charles Sprague [1887] as quoted by McMillan [1998, p. 1]
http://www.trinity.edu/rjensen/395wpTAR/Web/TAR395wp.htm#_msocom_1

574 Shields Against Validity Challenges in Plato's Cave ---
http://www.trinity.edu/rjensen/TheoryTAR.htm

What went wrong in accounting/accountics research? 
How did academic accounting research become a pseudo science?
http://www.trinity.edu/rjensen/theory01.htm#WhatWentWrong

 

Accounting ListServs and Blogs and Social/Professional Networking --- http://www.trinity.edu/rjensen/ListservRoles.htm

Bob Jensen's Home Page ---
http://www.trinity.edu/rjensen/