Guidelines for Authors

Wil van der Aalst; Peter Fettke; Hans-Georg Fill; Ulrich Frank; Oliver Hinz; Lars Mönch

This document provides authors with guidelines for writing papers. It consists of two parts. The first part reflects the results of a discussion within the editorial board of the journal. The discussion aimed, on the one hand, at identifying categories of research projects that are specific with respect to research topics and research methods and, on the other hand, at analyzing the requirements that corresponding submissions should ideally satisfy. The second part reflects a more pragmatic intention: it provides authors with lists of questions they should answer when submitting a paper that belongs to one of the relevant research categories.

Submission Categories: Peculiarities and Requirements

The reviewing process is of outstanding relevance for the reputation of a journal. On the one hand, it should result in elaborate and appropriate assessments of submissions. On the other hand, it should give authors valuable feedback for the continuation of their research. Various reputable journals in IS already provide guidelines for authors. While we agree with most of the recommendations made in these guidelines, we came to the conclusion that there is a need for specific guidelines for BISE authors, mainly for two reasons. First, the spectrum of topics covered by BISE differs from that of many other IS journals. In addition to behaviorist studies, BISE invites submissions on topics such as system design, the creation of prototypes, or simulation studies. Therefore, guidelines need to be adapted to the peculiarities of these topics and the respective research methods. Second, the assessment of research results creates specific challenges that were not always accounted for in the past. These relate to the documentation of data and other resources and, also, to principal epistemological problems. Such challenges have gained growing attention in Information Systems, sometimes in connection with replication studies. In recent years, there have been various scandals in the social sciences and in psychology that resulted from massive failures to trace (or replicate) results that were published in high-ranked journals (e.g., Carey 2015). As a consequence, the well-known request for transparency as an important, if not indispensable, instrument of quality assurance in Information Systems has gained additional attention. However, it is not always clear what specific requirements should be accounted for in order to satisfy this request.

The following guidelines aim to support authors and to promote the quality of papers and their reuse in subsequent studies. They are not intended to create a bureaucratic hurdle that might discourage authors or bother reviewers. There are cases where data disclosure is not possible. It might also be prohibitively expensive to provide the entire computational infrastructure used for experiments or the complete environment for which a particular software artefact was developed. In the end, the scientific community depends on trust. Trust, however, needs to be protected by promoting a high degree of transparency, though not total transparency. The first part of these guidelines gives an account of the different streams of research and the specific documentation requirements they create. In the second part, these general considerations are mapped to pragmatic checklists.

Prepare for Transparency

Any scientific knowledge claim needs to be supplemented with convincing evidence that enables others to understand and to challenge it. In general, that implies a clear description of the subject of the investigation, the knowledge claim, and the justification of the claim. Usually, justification will be aimed at truth. While BISE appreciates research on formal systems that focuses on proofs, in most cases studies aim at material concepts of truth, such as the correspondence concept of truth (empirical testing), the coherence concept (in line with existing knowledge in the literature), or the consensus concept (discursive evaluation through experts). In addition to empirical studies, transparency in Information Systems is also an issue in the design and implementation of software artefacts. Furthermore, transparency of data may also be relevant for the development and evaluation of formal methods: if the adequacy or superiority of a formal method is tested by applying it to empirical data sets, the availability of these data sets is crucial for comparing a method against alternative approaches.

Empirical Studies

Empirical studies can be divided into those that correspond to a behaviorist approach, such as field studies and experiments, and those that follow a more hermeneutic approach, such as case studies in general, or action research in particular.

Behaviorist Studies

In addition to a clear description of the theoretical model (operationalization of hypotheses, discussion of construct validity and reliability, etc.), it is required to describe how the data were collected and to provide additional information on the subjects represented by the data, such as region, demographic characteristics, etc. Furthermore, the time period during which the data were collected needs to be documented. Data sets should be provided as additional files using a format that allows for convenient access, e.g., XML.
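As a purely illustrative sketch (the element names below are our own, not a prescribed schema), a supplementary data file could bundle the records with the required collection metadata, here using Python's standard XML library:

```python
# Hypothetical sketch: packaging study data with collection metadata
# (region, collection period) in XML for a supplementary file.
# All element names and values are illustrative, not a prescribed schema.
import xml.etree.ElementTree as ET

def build_dataset(records, region, period):
    """Serialize records plus collection metadata into a single XML string."""
    root = ET.Element("dataset")
    meta = ET.SubElement(root, "metadata")
    ET.SubElement(meta, "region").text = region
    ET.SubElement(meta, "collection-period").text = period
    data = ET.SubElement(root, "records")
    for rec in records:
        item = ET.SubElement(data, "record")
        for key, value in rec.items():
            ET.SubElement(item, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_doc = build_dataset(
    [{"age": 34, "response": 5}, {"age": 41, "response": 3}],
    region="DACH",
    period="2015-03/2015-06",
)
```

Any self-describing, tool-independent format serves the same purpose; the point is that the metadata travel with the data.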
If IT artefacts are an essential part of a study, either as a primary subject or as an important analysis instrument, they have to be specified sufficiently (hardware, operating system, application system, tool …) to allow for reconstructing the setting. If research prototypes of software were involved in the study, the particular version that was used should be made available via a valid URL. If the system requires a specific environment that cannot be made available, authors should justify this in the cover letter.
However, all these measures are of little value if one essential prerequisite of empirical studies is not accounted for: the study has to be based on a theory (in a positivist sense, i.e., one that at least allows for weak prediction; see Olbrich et al. 2014). If that is not the case, data collection delivers a picture of a moving target only. Hence, if a replication of such a study produced a different outcome, that would not come as a surprise and could hardly serve as a convincing falsification.

Hermeneutic Studies

Hermeneutic studies focus on rich descriptions of cases in which researchers were, more or less actively, involved. Usually, they do not aim at collecting representative data. Instead, they may include subjective accounts of social settings that aim at comprehension, empathy, and sense-making. While those narrative elements may help with gaining new insights, their documentation demands specific care. It is especially important that the case or project that is the subject of a hermeneutic study is described thoroughly with respect to its peculiarities. That includes demographic data of actors involved in the study as well as an elaborate description of the organization and/or the relevant context in general. If there are good reasons for not including all this information in the paper itself, it should be provided in separate files.

Computational Studies of Optimization Techniques

These studies deal with assessing the performance of decision-making approaches in business information systems, i.e., exact or heuristic algorithms. The experimental conditions of the evaluation have to be described. This requires that the problem instances of the study are specified. The hardware and software environment and the available computing time have to be precisely described. Whenever possible, existing problem instances from the literature should be reused to ensure comparability. When novel problem instances are required, their generation scheme should be included. If possible, the instances should be made publicly available. To foster transparency, authors should report on the availability of data, i.e., problem instances and results, in the submission. Detailed computational results are welcome; they can be published as supplementary material if space limitations are a concern. In the case of computationally hard problems, authors are encouraged to provide exact algorithms and apply them to small-sized problem instances to obtain an indication that the heuristics are implemented correctly.
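A generation scheme can often be published as a short, seeded script. The following sketch is hypothetical (a knapsack-style instance generator); the point it illustrates is that a documented seed and an explicit capacity rule make every instance exactly reproducible by other researchers:

```python
# Hypothetical sketch of a published generation scheme for novel problem
# instances (knapsack-style, for illustration only). A fixed seed makes
# each instance reproducible without shipping the instance files themselves.
import random

def generate_instance(n_items, seed):
    rng = random.Random(seed)            # documented seed -> reproducible
    weights = [rng.randint(1, 100) for _ in range(n_items)]
    values = [rng.randint(1, 100) for _ in range(n_items)]
    capacity = sum(weights) // 2         # the capacity rule is part of the scheme
    return weights, values, capacity

weights, values, capacity = generate_instance(n_items=50, seed=2016)
```

Publishing the generator, the parameter ranges, and the list of seeds used is usually far cheaper than hosting large instance sets, while achieving the same comparability.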

Simulation Studies

The purpose of such studies is to assess the performance of business information systems and related decision-making approaches in a dynamic and possibly stochastic environment. The experimental conditions of the simulation study at hand, for instance the modeled sources of uncertainty, have to be reported. If possible, simulation models already described in the literature should be reused to ensure comparability. Authors should make new models publicly available. Since most simulation models are stored in the formats of proprietary simulation tools, it is desirable to use an intermediate, tool-independent format.
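The following sketch illustrates, under assumed conditions (an M/M/1-style single-server queue with illustrative parameter values), how the sources of uncertainty, the seeds, and the number of replications of a stochastic simulation can be made explicit and reproducible:

```python
# Hypothetical sketch: a stochastic simulation whose experimental conditions
# (random sources, seeds, replications) are fully documented in code.
# The single-server queue and all parameter values are illustrative only.
import random
import statistics

def simulate_waiting_time(arrival_rate, service_rate, n_customers, seed):
    """Mean waiting time in a FIFO single-server queue with exponential
    inter-arrival and service times, driven by one documented seed."""
    rng = random.Random(seed)                   # documented source of randomness
    clock = wait_total = last_departure = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(arrival_rate)  # arrival time of next customer
        start = max(clock, last_departure)      # service starts when server is free
        wait_total += start - clock
        last_departure = start + rng.expovariate(service_rate)
    return wait_total / n_customers

# Replications with distinct, documented seeds:
results = [simulate_waiting_time(0.8, 1.0, 1000, seed=s) for s in range(5)]
mean_wait = statistics.mean(results)
```

Reporting the seed list alongside the model lets readers rerun exactly the same replications, even when the model itself is later ported to a different simulation tool.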

Creation of Prototypes

In general, a software prototype needs to be made available publicly, preferably as a supplement to a research paper, to ensure long-term availability. Moreover, the implementation should be fully consistent with the paper. Ideally, the reader should be able to conduct a hands-on evaluation of the prototype, redo any experiments using the software, and/or re-implement the approach. In addition, the environment (platform, libraries, middleware …) needs to be specified to the extent required for running the software. This can be facilitated by providing elaborate conceptual models, algorithms, specifications, etc. If these representations are too extensive, they should be provided in additional files with the submission of a paper, and the authors should acknowledge the essential details left out (e.g., additional parameters). Authors should enable reviewers to test the prototype. If this is impossible, authors should explain the reasons in the cover letter.
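One lightweight way to document the environment is to ship a machine-readable record alongside the prototype. This sketch uses only Python's standard library; the field names and the pinned library version are illustrative assumptions, not a prescribed format:

```python
# Hypothetical sketch: capturing the environment a prototype was developed
# and tested in, as a machine-readable supplementary file. Field names and
# the example library pin are illustrative only.
import json
import platform

def environment_record(libraries=None):
    """Record interpreter, OS, and pinned library versions."""
    return {
        "python": platform.python_version(),
        "os": platform.platform(),
        "libraries": libraries or {},   # versions the prototype was tested with
    }

spec = json.dumps(environment_record({"numpy": "1.26.4"}), indent=2)
```

Such a record does not replace a full specification of the platform, but it gives readers a concrete starting point for reconstructing the setting.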

Guiding Questions

The following guiding questions serve to identify and properly address potential issues with the documentation and traceability of scientific studies. Authors are requested to select the category (or the categories) their paper belongs to and provide answers to the respective questions in the cover letter.

Behaviorist Studies

  • What are the data sets used?
  • Are they publicly available (provide URL)?
  • If not, do reviewers have access to them, and if so, how?
  • If not, describe the reasons for not being able to share the data with the reviewers (e.g., an NDA).
  • What are the software tools used?
  • Are they publicly available (provide URL)?
  • If not, do reviewers have access to them?
  • If not, describe the reasons for not being able to share the tools with the reviewers (e.g., an NDA).

Hermeneutic Studies

  • Are the organizations in which the case study or studies took place identified in the paper?
  • If not, could you provide the reviewers with this information?
  • If not, describe the reasons for not being able to share this information with the reviewers.
  • Can you make all the transcripts and other documents the research is based on available to readers?
  • If not, can you make all the transcripts and other documents the research is based on available to reviewers?
  • If not, what are the reasons for this restriction?

Computational Studies of Optimization Techniques

  • Is the generation scheme for novel problem instances included?
  • Are the novel instances made publicly available?
  • Are exact algorithms applied to small-sized instances of computationally hard problems to obtain an indication of a correct implementation?
  • Are all relevant aspects of the experiments specified in the paper?
  • Are all relevant components of the hardware and software environment and the available computing time precisely described in the paper?
  • Are there any detailed results of your study that could be published as supplementary material?

Simulation Studies

  • Are all relevant aspects of the simulation specified in the paper?
  • Are the models you use publicly available?
  • If so, what are the data formats you support to represent the simulation models?
  • If not, what are the reasons for not making them available?

Creation of Prototypes

  • Is the prototype available to the reader?
  • If not, can you make it available to reviewers?
  • If not, what are the reasons for this restriction?
  • Is the environment the prototype runs on sufficiently specified?
  • Does the description of the prototype include conceptual models?
  • If not, what are the reasons for not including models?


References

Bichler, M.; Frank, U.; Avison, D.; Malaurent, J.; Fettke, P.; Hovorka, D.; Krämer, J.; Schnurr, D.; Mueller, B.; Suhl, L.; Thalheim, B. (2016): Theories in Business and Information Systems Engineering. Business & Information Systems Engineering 58(4):291-319
Carey, B. (2015): Many Psychology Findings Not as Strong as Claimed, Study Says. New York Times
Dennis, A. R.; Valacich, J. S. (2014): A Replication Manifesto. AIS Transactions on Replication Research 1(1):1-4
Olbrich, S.; Frank, U.; Gregor, S.; Niederman, F.; Rowe, F. (2014): On the Merits and Limits of Replication and Negation for IS Research. AIS Transactions on Replication Research, Vol. 3, Article 1, 2017