Relational Databases versus Graph Databases: Gen X versus Gen Y

When it comes to databases, developers have choices and preferences. For some the relational model is tried and tested; for others it is outdated and confining. So what is the best choice? Where do you hedge your bets? This post will define both models and explore the connection between them in order to get at the crux of this question. While data modelling is continually evolving, and therefore continually complicated, for this discussion at least we will stick to the basics.
Relational Databases:
Relational databases are not a “new thing”. While the prevailing technology narrative instils in us the sense that the world wide web and all its glories are very recent inventions, the relational database has existed since the 1970s. Conceived by Edgar Codd, this system is anything but new. The model is built on a system of tables, and the references or relationships between these tables are defined by “keys”. To connect these tables, that is, to display the relationship, a process called “joining” is required. These operations are usually considered quite server- and memory-intensive. Therefore, to produce the most efficient model the developer must choose to what level they will “normalise” their data. Normalisation is a process of standardisation in which data is divided into smaller, more precise tables so that each fact is stored only once. Ranging from first to fifth normal form, the degree of normalisation applied is usually at the discretion of the developer; however, additional costs, server space and processing intensity all factor into this decision. While normalisation can strengthen the capabilities of the database, it has been argued that the need for strict classification can cause difficulties for non-quantitative fields of study, such as the humanities. Conversely, for simple data structures which require little probing the relational database can serve as a tried and reliable model.
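To make the tables-and-keys idea concrete, here is a minimal sketch in Python using the standard library’s sqlite3 module. The table names (author, work) and their columns are hypothetical, chosen purely for illustration: the schema is normalised so that author details live in their own table and are referenced by a foreign key, and a JOIN reassembles the relationship at query time.

    import sqlite3

    # In-memory database; a minimal sketch, not a production schema.
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Normalised layout: author details are stored once, in their own table...
    cur.execute("CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT)")
    # ...and each work points back to its author through a foreign key.
    cur.execute("""CREATE TABLE work (
                       id INTEGER PRIMARY KEY,
                       title TEXT,
                       author_id INTEGER REFERENCES author(id))""")

    cur.execute("INSERT INTO author VALUES (1, 'Edgar Codd')")
    cur.execute("INSERT INTO work VALUES (1, 'A Relational Model of Data', 1)")

    # The relationship only becomes visible through a JOIN, the operation
    # described above as server- and memory-intensive at scale.
    cur.execute("""SELECT work.title, author.name
                   FROM work JOIN author ON work.author_id = author.id""")
    print(cur.fetchall())  # [('A Relational Model of Data', 'Edgar Codd')]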
Graph Databases:
As stated above, the relational database is tried and tested, both successful and useful for certain types of data structures, and arguably could be utilised for others if incorporated properly. However, it is possible that the success of relational databases rests on rocky foundations. A lack of real competition has long served to flatter the strength of RDBs. Graph databases present a new challenge to this longstanding hegemony, and now that a viable alternative is on the market, a decision must be made.
It could be argued that graph databases simply build on the relational model. Both of these databases rely on their ability to connect or “join” data in response to queries. However, in the case of graph databases, tables and keys have been replaced by a system of nodes and edges. Essentially, the nodes represent classes or entities.
These entities are “joined” by relationship records, or edges, which can be defined by type, direction and other additional attributes. When performing the graph equivalent of a “joining” operation, a graph database uses this list of edges, or “predicates”, to find the connected nodes. The benefit of this nodes-and-edges system is the intuitive way in which it allows you to store and manipulate data. In a graph database your data is more flexible; while it should still be kept small and normalised to a degree, the verb-defined connection process allows for greater adaptability. This is particularly relevant for humanities databases, as it allows expansion beyond the confines of the RDB when handling difficult data.
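The graph model can be sketched just as briefly. The following Python snippet is a toy illustration, not the API of any particular graph database: nodes are records, edges are (subject, predicate, object) triples, and the equivalent of a join is simply a scan of the edge list for a matching predicate.

    # Toy graph store: a sketch of the nodes-and-edges idea under assumed,
    # illustrative names; real graph databases expose far richer query languages.
    nodes = {
        "codd": {"type": "person", "name": "Edgar Codd"},
        "paper": {"type": "work", "title": "A Relational Model of Data"},
    }

    # Each edge is a (subject, predicate, object) triple; the predicate is the
    # verb-defined relationship discussed above.
    edges = [
        ("paper", "written_by", "codd"),
    ]

    def connected(subject, predicate):
        # Follow edges of a given type out of a node: the graph
        # equivalent of a "joining" operation.
        return [nodes[obj] for subj, pred, obj in edges
                if subj == subject and pred == predicate]

    print(connected("paper", "written_by"))
    # [{'type': 'person', 'name': 'Edgar Codd'}]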
Conclusion:
In conclusion, if your data is relatively neat and quantitative there is no need to feel pressured into incorporating it into a graph database. In these scenarios RDBs have been proven an effective model for uncomplicated data storage. However, if your data is becoming more complex and requires more detail, it may be worth upgrading to the newer graph database. Not only will this database allow for greater flexibility with your data, but it will also decrease the strain of “joining” operations thanks to its subject, object and predicate basis.
Therefore, after constructing an idea of how the app should function, in line with best practice guidelines and keeping the historical narrative in mind, it was time to start app development. For the map itself I am using
support Framework7 users, it is pretty daunting for a novice coder. I still face many challenges in implementing my code and achieving the features I want – content popups are my latest struggle – but the greatest lesson I have learned is to have confidence in what you do know. The application is difficult, increasingly so as the app develops and its features become more technical, and while the functions do not appear to resemble anything like those in my jQuery crash course, there is little to separate them. The moral of the story when approaching a mammoth task like this: you know more than you think you do.
creation. The first is data collation. During a recent lecture Dr Vinayak Das Gupta posited to our class that data is fact regardless of our state of knowing. Building on this, if we can accept that data is fact, and therefore non-negotiable, our emphasis must shift to the researcher who gathers this data. For any form of data collation, the researcher must set parameters by which to define their data. These parameters decide which data is included and which is left behind. From the humanities perspective this selection process is akin to the selection of primary sources. A good researcher will aim to gather large quantities of data using clear and unbiased parameters, in the same way that historians aim to use a wide variety of primary source material. Inherent in both disciplines is the possibility of biased selection. Therefore, when using data visualisations, as with secondary texts, it is essential to interrogate how the data was collected and which parameters were applied.
visualisation with another placing American military spending within the context of American GDP. The new context reveals a new side to the data and alters our view of military spending. Therefore, context is integral to understanding whether these visualisations are providing us with knowledge or with a skewed reflection of manipulated data sets.


As any student of the humanities could probably tell you, the study of their subject is never simplistic and always incorporates the theory of the discipline. As a history student I took at least three modules on interpreting history and historiography over the course of my four-year degree, and this doesn’t even account for the hours spent on the same subject within other modules. The humanities are pervaded by the ambition to understand themselves fully; the field is increasingly self-aware and seeks to analyse the methodology and interpretations of every community member in order to understand both practice and results. As a member of this community, archaeology, digital or not, will likely always be a contributor to these discussions. Many archaeologists have sought to confirm archaeology as the epitome of transdisciplinarity, a study which incorporates both scientific methodology and the humanities even before the digital is applied (