Chapter 17 Using the ROCK for Epistemic Network Analysis

Epistemic Network Analysis (ENA) is a method and accompanying software tool housed within the larger methodological framework of Quantitative Ethnography (Shaffer, 2017). ENA was originally developed for modelling and comparing the structure of connections among various elements in a data set. In the case of qualitative data (narratives), connections are generated from co-occurrences of codes within segments; these co-occurrences are visualized in a network. ENA is a useful tool if one is working with (a large number of) variables in a single system and can benefit from modelling complex structures in search of patterns in the data. To read more about ENA and Quantitative Ethnography, see Shaffer (2017).

Below we offer guidelines for using the ROCK to prepare data for use in ENA. The ROCK provides help in the process of preparing and performing coding and segmentation, merging coded documents from multiple raters, and creating the qualitative data table (CSV file) necessary for making networks. The ROCK will aid you the most if you are working with continuous narratives (e.g. semi-structured interviews; for more details see below: Planning Segmentation) and performing manual coding (as opposed to automated coding; for more, see nCoder).

The following is a step-by-step account of how to employ the ROCK in creating networks from your data; these steps need not be followed in strict order, as work processes are highly dependent on the project in question. The ROCK conventions will be illustrated with a worked example. In general, the guidelines are structured as follows: theoretical considerations are presented in separate sections, followed by instructions for use in the ROCK, and the corresponding information from our worked example under each sub-section.

17.1 Starting point

These guidelines assume that the researcher is familiar with the basic tenets of Quantitative Ethnography (QE) and ENA (although important terms will be clarified). The starting point of the guidelines also presupposes that the researcher is working with an anonymized database of raw, qualitative data that was collected in a systematic manner during a project where the research question, sub-questions, methods, and sampling have all been established. The guidelines do not provide advice on research design.

As a preliminary step, please install the required software as explained in sections 12.1 and 12.2 in Chapter 12.
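
For orientation, the core of that installation is obtaining the rock package from CRAN; a minimal sketch in R (see Chapter 12 for the full setup, including the R Markdown files whose chunks are referenced later in this chapter):

# Install the ROCK package from CRAN (only needed once)
install.packages("rock")

# Load the package in the current R session
library(rock)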

Our example

Very succinctly, we were interested in modelling cognitive and behavioral patterns in patient decision-making processes regarding choice of therapy, i.e. for a certain diagnosis, what sources of information are considered, what specific decisions are made during the patient journey, and what conceptual framework the patient has concerning illness causation. We wanted to know what cognitive patterns underlie the decision to use different types of medicine (conventional and non-conventional). To read more about the research questions and design (methods, sampling, etc.), please see the methods section of Zörgő & Hernández (2018) and Zörgő et al. (2018), and the methodological considerations in Zörgő & Peters (2019).

17.2 Planning coding

17.2.1 What is a code?

One aim of QE is to identify patterns within a community of practice (culture or subculture), which may be referred to as “Discourse” (capitalization intended). To do this, the researcher gathers “discourse”, that is, data from the scrutinized community, such as transcripts from interviews or focus groups, field notes from observations, etc. “Codes” (capitalization intended) can be defined as culturally relevant and meaningful aspects of a Discourse, the elements that the researcher wishes to address in the process of analysis; these elements will constitute the nodes of the network model. Finally, “codes” are manifestations of these elements that one identifies in their data, i.e. evidence for Codes within the narratives. For a more elaborate description of the QE framework see: xxx.

17.2.2 Types of coding

As with coding qualitative data in general, there are several decisions one needs to make. Should I code with a predetermined set of codes (deductive coding), or should I allow the codes to emerge from the narratives as I progress in analysis (inductive coding)? Both manners of coding have their advantages and disadvantages (for more details see Smith & Osborn (2008), Denzin & Lincoln (2000) and Babbie (2007)). The ROCK enables the researcher to employ one or the other, or even both.

Another consideration, with both deductive and inductive coding, is whether the set of codes should be hierarchical or not. A hierarchy would imply that some codes constitute part of other, more abstract codes, such as the parent code fruit containing the child code banana. If codes are not arranged hierarchically, they would still constitute a single analytical system in light of the research question, but cannot be conceptualized as containing one another, such as fruit, dairy, meat, grains, and vegetables. Again, the ROCK supports both hierarchical and non-hierarchical constructs.

17.2.3 How to represent codes in the ROCK

The general format for representing codes in the ROCK is placing the code name (e.g. fruit) in between two square brackets, for example: [[fruit]]. If the code name contains two or more words, we suggest using an underscore to separate them, e.g.: [[exotic_fruit]]; we also suggest keeping the code names concise but informative.

Both hierarchical and non-hierarchical inductive codes can be generated in the above format with the help of the Interface for the ROCK (iROCK) platform (see below: Coding and Segmentation).

Both inductive and deductive hierarchical codes necessitate a greater-than sign to signal their place in the overall structure, for example: [[fruit>banana]] connotes a two-level hierarchy; [[fruit>exotic>banana]] connotes three levels.

Hierarchical and non-hierarchical deductive codes need to be specified before coding begins and listed in a file designated specifically for this. Deductive codes may be structured in several code clusters or trees (for more detail see: xxx). Non-hierarchical deductive coding should follow the above format for the ROCK codes; for example, the codes in the previous example would look like this:

[[fruit]]
[[dairy]]
[[meat]]
[[grains]]
[[vegetables]]

Hierarchical and non-hierarchical deductive codes are essentially a list of codes, in the above format, placed into a separate file, preferably with a .rock extension (see previous section: General background and introduction: .rock file format).
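
For instance, a hierarchical deductive code list stored in its own .rock file might look like the following (the specific hierarchy below is invented for illustration):

[[fruit>banana]]
[[fruit>exotic>mango]]
[[dairy>cheese]]
[[vegetables>leafy>spinach]]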

Our example

In our case, Discourse refers to patterns in cognition and behavior among patients using biomedicine only, and patients using non-conventional medicine to treat their illness(es). We can also consider the individuals in these two groups as all belonging to larger groups delimited by their “primary diagnosis”. We began by choosing four diagnoses (D1-4): Diabetes (D1), Musculoskeletal diseases (D2), Digestive diseases (D3), and Nervous system diseases (D4). Thus, every individual in our study belongs to a group indicating their primary diagnosis (D1-4) and their choice of therapy. Individuals can be grouped further based on other characteristics (for more on this subject, see below: Designating Attributes).

Our discourse consists of transcripts from semi-structured interviews conducted with patients belonging to one of the four diagnosis groups and representing different choices of therapy (for more on the latter, see below: Designating Attributes).

The Codes we were interested in encompass the three main areas of interest within the project: sources of information (epistemology), concepts of illness causation (ontology), and decisions in the patient journey (behavior). We employed both deductive and inductive coding. We coded the above three areas of interest with a predetermined set of hierarchically organized codes on three levels of abstraction, comprising 52 low-level codes in total. The complete code tree can be accessed here: xxx. Our inductive coding concerned only illnesses. As interviewees also spoke about comorbidities during the interview, we found it important to distinguish between the primary diagnosis and the other, specific comorbidities a patient refers to within the narrative.

17.3 Planning Segmentation

17.3.1 What is discourse segmentation?

Segmentation, according to QE, is the process of dividing data up into sensible structures, meaningful parts. There are different levels and modes in which one can segment narratives; these segments will be important in the creation of a network because connections are formed based on the number of code co-occurrences within the designated segments.

Following the QE framework, there are three important levels of segmentation to consider: the smallest unit of segmentation (utterance), a middle level (stanza), and a high level (unit). At this point we will address the first two of these; units will be dealt with later (see below: Creating Networks).

An utterance is the smallest entity of analysis in a narrative. This can be one sentence (e.g. a semi-structured interview’s utterances are sentences articulated by the interviewee) or more than one sentence (e.g. one remark made by one participant in a focus group). An utterance can also be one line or one entry in a field journal, for example. In any case, coding will occur at this level.

Although coding occurs at the level of utterances, co-occurrences are computed based on a higher level of segmentation, the stanza. A stanza is a level of discourse structure composed of one or more utterances that occur in close proximity and discuss the same topic (i.e. recent temporal context). Stanza size reflects how much content the researchers consider indicative of psychological proximity. Researchers who are only interested in tightly connected concepts may prefer shorter stanzas; if the research topic concerns broader, more complex issues, researchers may want to define larger stanzas. Stanza size crucially determines analysis results; thus, the rules for segmentation should not be arbitrary and should be made transparent. In order to explore various versions of segmentation, more than one stanza-type can be employed (i.e. multiple ways of defining stanza length, with multiple identifiers). Furthermore, stanzas constitute merely one way of segmenting data on the middle level; one may want to utilize other forms of section breaks (for details see: Cognitive Interviews).

17.3.2 Continuous and discontinuous narratives

Qualitative data can come in many forms and have varying characteristics; an anthropological field journal presents us with a very different text compared to a focus group or an interview, for example. Thus far, ENA has mainly been used for discontinuous data, namely, teams of people performing tasks in a common virtual reality or performing virtual tasks in a shared physical reality (see Ruis et al. (2018)). Similar to focus group situations, these studies worked with data that was supplied by several participants and on several, discrete occasions. Naturally occurring “turns of talk” among participants provide excellent opportunities for segmentation, for example, students discussing how to accomplish a common task in a chatroom (Bressler et al., 2019).

Continuous narratives are distinguished from discontinuous narratives by the lack of naturally occurring possibilities for segmentation; such text may originate from the transcript of a semi-structured interview or an audio diary. Because there are no “turns of talk” (or they occur only between interviewer and interviewee and yield little contextual information), and the whole text may be intricately connected internally, demarcating stanzas becomes a challenge. Similar problems may occur with the smallest unit of analysis, especially if it is defined as “one sentence”. The verbatim transcription of speech comes with inherent subjectivities; as a spoken sentence may stretch on at great length, the transcriber makes many judgement calls regarding punctuation. Yet, as co-occurrences are computed based on stanzas, the length of an utterance is not decisive in this particular case.

17.3.3 How to represent segmentation in the ROCK

In the ROCK, an utterance is one line in a source file (a line being defined as zero or more characters ending with a line ending), and each utterance is marked with an “utterance identifier” (UID). When the ROCK reads a file containing text and utterance identifiers, it splits the file at the line endings (newline characters). This parsing is necessary to perform the coding of each utterance in a file and be able to work with that information later on. An example of a UID is [[uid=73ntnx8n]]; each utterance receives a unique identifier. Utterances may be grouped together with the aid of section breaks, one of which is the stanza; the ROCK uses this particular format: <<stanza-delimiter>>, where the name “stanza-delimiter” may be changed according to the segmentation needs of the project.
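
To illustrate how these elements combine, here are a few lines of a hypothetical source after UIDs have been prepended, two codes have been applied, and a stanza break has been inserted (the UIDs, code names, and text are invented for illustration):

[[uid=73ntnx8n]] I was diagnosed about five years ago. [[illness]]
[[uid=73ntnx8p]] At first I only took what my doctor prescribed. [[behavior]]
<<stanza-delimiter>>
[[uid=73ntnx8q]] Then a friend suggested I look into herbal teas.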

Our example

In our project, an utterance is defined as one sentence. Each sentence constitutes one line in the ROCK; each utterance/line receives a unique identifier. Regarding the stanza, we employed the generic definition above, but we had three different raters perform segmentation autonomously based on their judgement of psychological proximity. Their three identifiers were: <<stanza-delimiter-low>>, <<stanza-delimiter-mid>>, and <<stanza-delimiter-high>>. The names indicate the level of knowledge each rater had concerning the scrutinized research topic; “low” signified a (naïve) rater not connected to the research project, only privy to the interview transcripts. “Mid” was employed by a research assistant with a significant amount of prior knowledge on research objectives and codes, while the “high” delimiter was used by the principal investigator. Thus, we had three different stanza-types for all interview transcripts in order to explore which stanza-type creates the best models.

17.4 Designating Sources and Cases

17.4.1 What is a source and how is it represented in the ROCK?

A source is a file with content to code (or coded content); it can contain the transcript of an interview or a focus group discussion, or even a list of Twitter posts. Sources comprise one or more utterances from one or more participants of a study. Sources should be plain-text files and can bear any name, although names should be kept concise, as these will be displayed in the ENA interface later on. Information relevant to the study can also be encoded in the name, such as “5-female-30s”, indicating that this is the fifth interview and that it was conducted with a female participant in her 30s.

17.4.2 What is a case and how is it represented in the ROCK?

A case signifies a participant, a provider of data within a study. This can be a person, a family, an organization, or any other unit of research. In the case of individual interviews, the source and the case may be identical, but it is important to distinguish between them, as one source can contain data from many cases. For example, a focus group transcript constitutes a source, while its six participants constitute separate cases within the source. Each case receives a unique identifier (case id, CID) and is represented in the ROCK with double square brackets and a designated name, for example: [[cid=alice]]. Naturally, in anonymized studies it is preferable to use an alias of some sort; this can even be a number.
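
For illustration, below is a fragment of a hypothetical focus group source; here we assume that a case identifier applies to the utterances that follow it until a new identifier occurs (the names and text are invented):

[[cid=alice]]
I usually look things up online before deciding anything.
[[cid=bob]]
I just ask my doctor and leave it at that.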

17.5 Designating Attributes

17.5.1 What is an attribute?

Cases can be supplemented with characteristics or variables; we refer to these as “attributes” (ENA term: metadata). For each participant you may want to collect additional data, such as demographic variables, or even conduct a survey in addition to an interview and record the answers respondents provide. You may also want to register aspects such as the date an interview was conducted, the researcher who led a focus group, or the sequence in which audio diaries were recorded. Attributes essentially allow you to group participants in various ways (thus creating different networks) and enable other types of analysis by flexibly changing the sets of data you want to see a network for (i.e. conditional exchangeability); for more, see below: Creating Networks.

17.5.2 How to record attributes in the ROCK

There are two main ways you can record attributes for cases in your study using the ROCK. One is to place this information into the source directly: for example, you open the plain-text file of your semi-structured interview and enter the attributes above or below the narrative. The other option is to create a separate .rock file containing the attributes of all participants. In either case, the format for entering attribute-related information is the following:

---
ROCK_attributes:
  -
    caseId: 1
    sex: female
    age: 50s
  -
    caseId: 2
    sex: male
    age: 30s
---

The above displays the aggregated version of recording attributes (illustrated with two cases). The list of entries begins with three dashes, followed by the attributes listed in the manner displayed, and ends with three dashes. In later phases of data preparation, the ROCK will read this information and assign it to the appropriate case.

Our example

For each interview we recorded the following attributes: interview date, interviewer ID, interviewee ID, interviewee sex, age, level of education, diagnosis type (D1-4), specific illness, comorbidities, illness onset, time of diagnosis, and therapy choice (treatment type concerning the primary diagnosis: biomedicine only, complementary use of non-conventional medicine, or alternative use of non-conventional medicine). For complementary and alternative medicine (CAM) users, we also registered type of CAM use, attendance in CAM-related courses, disclosure of CAM use to their conventional physician, and the employed CAM modalities. For users of biomedicine only, the reason for rejecting CAM was also coded (deductively).

To summarize, here is a list of terms we have discussed thus far and some examples for each term:

Discourse: patterns within a community of practice (culture or subculture). E.g.: patterns in cognition and behavior among those using biomedicine only and those using non-conventional medicine to treat their illness(es).

discourse: data from the scrutinized community. E.g.: transcripts of semi-structured interviews conducted with patients.

Code: culturally relevant and meaningful aspects of a Discourse. E.g.: sources of information, concepts of illness causation, and decisions in the patient journey.

code: manifestations of these elements that one identifies in their data, i.e. evidence for Codes. E.g.: our hierarchical code structure with three levels of abstraction and 52 low-level codes; the ROCK non-hierarchical code format, e.g.: [[exotic_fruit]]; the ROCK hierarchical code format, e.g.: [[fruit>banana]].

Segmentation: the process of dividing data up into sensible structures, meaningful parts.
  • Utterance (e.g.: one sentence)
  • the ROCK utterance identifier (UID) format: [[uid=73ntnx8n]]
  • Stanza (e.g.: psychological proximity, recent temporal context)
  • the ROCK section break format, e.g.: <<stanza-delimiter>>

Discontinuous narratives: text containing naturally occurring “turns of talk” among participants. E.g.: students discussing how to accomplish a common task in a chatroom.

Continuous narratives: text lacking naturally occurring possibilities for segmentation. E.g.: semi-structured interviews, audio diaries, etc.

17.6 Coding and Segmentation

17.6.1 How is coding performed with the ROCK?

Although manual coding can be performed within a qualitative data table in a spreadsheet (for more detail see: xxx), when conducting hermeneutic analysis with a high number of codes, this becomes unwieldy. For this reason, we developed the Interface for the ROCK (iROCK), an online user platform consisting of a single file that combines HTML, CSS, and JavaScript to provide a rudimentary graphical user interface. Because iROCK is a standalone file, it does not need to be hosted on a server, which means that no data processing agreements are required (as per the GDPR). The iROCK interface allows raters to upload a source, a list of codes, and segmentation identifiers. Coding is performed by dragging and dropping codes onto utterances at the end of their line. Once coding is finished, the coded sources can be saved. There are a few preparatory steps you need to take before you can start coding your sources.

17.6.2 Creating code clusters or trees

Depending on your research design, the code structure, and the number of codes you are working with, you may want to have separate code clusters or discrete, hierarchically organized trees. You may also want to assign different raters to specific code clusters/trees, or have several raters use the same code structure to perform autonomous coding (this allows for triangulation or even inter-rater reliability testing; for more on this subject see: xxx). In these instances, you end up with multiple coded versions of a source: for example, if interview number 1 (cid=1) is coded by two raters (R1 and R2), there will be two coded versions of Case 1. In a situation where R1 and R2 are autonomously coding different sections of the whole code structure, they will need separate code trees or clusters. Thus, the preparation of code lists depends on how many ways the whole code structure is divided among raters: each code group (cluster or tree) should be listed in its own .rock file. Naturally, if your code structure is not divided up amongst raters, then one all-inclusive list suffices (even if that list is used by multiple raters).
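
To illustrate, if the code structure from our earlier fruit example were divided between two raters, each part would be saved as its own .rock file; a hypothetical division (the file names are invented):

A file named produce.rock, given to R1:

[[fruit]]
[[vegetables]]

A file named animal-products.rock, given to R2:

[[dairy]]
[[meat]]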

17.6.3 Preparing sources for coding

In order to perform coding, the ROCK needs to “clean” your sources, i.e. parse the plain-text files so that each line contains one sentence (or however an utterance is defined). The ROCK also needs to add UIDs to each line in order to be able to merge codes from various raters in a later phase of the process (see below: Merging Coded Sources). Hence, you will need four directories, all of which need to be retained: original-sources, clean-sources, sources-with-uids, and coded-sources. Below are the steps for preparing your data for coding:

  1. Copy your raw sources (plain-text files) into the “original-sources” directory.
  2. Locate the R chunk called “# Preparing and cleaning sources” and run it. This will clean the sources, convert them to .rock format, and save them to the “clean-sources” directory.
  3. Locate the R chunk called “prepend-utterance-ids” and run it. This will load the cleaned sources, prepend the UIDs, and save them to the “sources-with-uids” directory.

You can safely repeat these steps; they will not overwrite existing files. Once the files have appeared in the sources-with-uids directory, they are ready for coding.
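
For orientation, here is a minimal sketch of what those two chunks do, assuming the rock package’s clean_sources() and prepend_ids_to_sources() functions and the directory names used above (argument details may differ across package versions):

# Step 2: clean the raw sources and write .rock files
# to the "clean-sources" directory
rock::clean_sources(
  input  = "original-sources",
  output = "clean-sources"
)

# Step 3: prepend a unique utterance identifier (UID) to every line
# and write the results to the "sources-with-uids" directory
rock::prepend_ids_to_sources(
  input  = "clean-sources",
  output = "sources-with-uids"
)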

17.6.4 Using the iROCK interface

The iROCK platform can be found at: https://r-packages.gitlab.io/rock/iROCK/. Once you arrive at the website, you will see a ribbon at the top; by clicking on “Sources” you can upload the file you wish to code; “Codes” and “Section Breaks” can also be uploaded here. (Thus, coding and segmentation can be conducted simultaneously.) Perform coding by dragging and dropping the appropriate codes from your uploaded list to the end of each utterance. When coding is complete, download the file to your computer and place it into the “coded-sources” directory.

Our example

We divided our code structure into three main areas that were reflected in the research question and in the complete code tree as well (three high-level codes): epistemology, ontology, and behavior. The low-level codes belonging to these three parent codes were given to three different raters, each of whom specialized in one specific code tree. The three raters performed coding separately; none of their codes overlapped. A fourth rater inductively coded the illnesses present in the narratives. Each rater downloaded their coded files to a shared folder housed in secure cloud storage (we use the GDPR-compliant Sync, available at: https://www.sync.com/); each of the four raters had their own folder where they deposited every source they coded. Subsequently, the coded sources were copied to the “coded-sources” directory, where every source received a separate sub-directory comprising four versions of the source (three versions coded with the parent code trees and one version with inductively coded illnesses). Segmentation was performed by two of the above four researchers and one naïve rater; each used the section break identifier matching their level of expertise (<<stanza-delimiter-low>>, <<stanza-delimiter-mid>>, or <<stanza-delimiter-high>>). Thus, in the end, we had five versions of a source: three coded with the parent code trees (including segmentation; high-level), one coded with illnesses (including segmentation; mid-level), and one segmented by a naïve rater (low-level). Because we had so many versions of one source, none of which was complete on its own, we needed to merge these files (see below).

17.7 Merging Coded Sources (if necessary)

If the research design and protocol call for multiple raters coding all sources, the coded sources need to be merged into a master document. This is necessary because the ENA interface (to be used for creating the networks) requires a Comma-Separated Values (CSV) file to be uploaded, which contains all sources, attributes, utterances, codes, and segmentation, together referred to as a “qualitative data table”, with rows and columns that are ontologically consistent (Shaffer, 2017). This master document can be produced with the ROCK by locating the R chunk called “# Merging sources” and running it. Provided the attributes were listed in a separate .rock file, use the R chunk called “# Reading merged sources” to create the CSV file comprising the master document with attributes added. Merging coded sources may also be required if the project is later revisited with a different set of codes by the same researcher, or if many researchers are collaborating on the same project asynchronously.
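
As a rough sketch of what those chunks amount to, assuming the rock package’s merge_sources() and parse_sources() functions, and assuming the parsed object exposes the qualitative data table as mergedSourceDf (names and arguments may differ across package versions):

# Merge the coded versions of each source into one master version
rock::merge_sources(
  input  = "coded-sources",
  output = "merged-sources"
)

# Parse the merged sources (a separate attributes .rock file in the same
# directory is read along with them) and write the qualitative data table
# to a CSV file for upload to the ENA interface
parsedSources <- rock::parse_sources("merged-sources")
write.csv(parsedSources$mergedSourceDf,
          "qualitative-data-table.csv",
          row.names = FALSE)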

17.8 Creating Networks

The CSV file can be uploaded to the ENA web interface (available at: http://www.epistemicnetwork.org/) or further processed with the rENA package (available at: https://cran.r-project.org/web/packages/rENA/index.html). A tutorial on how to apply the QE framework and employ the ENA software can be found here: http://www.epistemicnetwork.org/resources/. You may benefit from reading ENA tutorials and worked examples in the preliminary phases of your research, as they address further questions that may, for example, influence how you plan discourse segmentation in your project.
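
If you prefer to stay in R, rENA follows an accumulate-then-rotate workflow; below is a minimal sketch based on the functions documented for the package, where the column names are hypothetical and must be replaced with those in your own qualitative data table (window settings are omitted; see the rENA documentation for choosing whole-conversation versus moving stanza windows):

library(rENA)

# Read the qualitative data table produced in the previous section
qdt <- read.csv("qualitative-data-table.csv")

# Accumulate code co-occurrences within conversations (e.g. stanzas), per unit;
# the code columns are expected to be 0/1 indicator columns
accum <- ena.accumulate.data(
  units        = qdt[, c("caseId", "therapyChoice")],  # hypothetical columns
  conversation = qdt[, c("caseId", "stanzaId")],       # hypothetical columns
  codes        = qdt[, c("epistemology", "ontology", "behavior")]
)

# Create the ENA set (normalization, rotation, projection)
set <- ena.make.set(enadata = accum)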

References

Babbie, E. (2007). The Practice of Social Research. Wadsworth.
Bressler, D., Bodzin, A., Eagan, B., & Tabatabai, S. (2019). Using epistemic network analysis to examine discourse and scientific practice during a collaborative game. Journal of Science Education and Technology, 28(5), 553–566.
Denzin, N., & Lincoln, Y. (2000). Handbook of Qualitative Research. Sage Publications.
Ruis, A. R., Rosser, A. A., Quandt-Walle, C., Nathwani, J. N., Shaffer, D. W., & Pugh, C. M. (2018). The hands and head of a surgeon: Modeling operative competency with multimodal epistemic network analysis. American Journal of Surgery, 216(5), 835–840.
Shaffer, D. W. (2017). Quantitative Ethnography. Cathcart Press.
Smith, J. A., & Osborn, M. (2008). Interpretative phenomenological analysis. In J. A. Smith (Ed.), Qualitative psychology: A practical guide to research methods (pp. 53–80). Sage Publications.
Zörgő, S., & Hernández, O. (2018). Patient Journeys of Nonintegration in Hungary: A Qualitative Study of Possible Reasons for Considering Medical Modalities as Mutually Exclusive. Integrative Cancer Therapies, 17(4), 1270–1284.
Zörgő, S., & Peters, G.-J. Y. (2019). Epistemic Network Analysis for Semi-structured Interviews and Other Continuous Narratives: Challenges and Insights. In B. Eagan, M. Misfeldt, & A. Siebert-Evenstone (Eds.), International Conference on Quantitative Ethnography (pp. 267–277). Springer. https://doi.org/10.1007/978-3-030-33232-7_23
Zörgő, S., Purebl, G., & Zana, Á. (2018). A Qualitative Study of Culturally Embedded Factors in Complementary and Alternative Medicine Use. BMC Complementary and Alternative Medicine.