Methodology & Experience
Methodology, Use of Technology, and AI
The methodology for this project evolved significantly from my initial plan. At the proposal stage, I anticipated focusing primarily on parliamentary procedures in the House of Commons, particularly how they related to the points system. However, during my research, two challenges emerged. First, despite using Hansard and LiPaD, I found it surprisingly difficult to locate extensive debates on the topic. While there were occasional mentions, they were too brief and infrequent to form the foundation of my analysis or to meet the requirements of the course. Second, in the course of searching for parliamentary sources, I discovered a breadth of detailed policy materials from Pier 21 and the Government of Canada, which proved far more substantive and relevant. As a result, I shifted from a debate-centered approach to a policy-oriented one, incorporating House of Commons debates where they were most useful.
The final exhibition was built as an online digital project using Omeka, integrating text, scanned documents, and interactive timelines. Following Cohen and Rosenzweig’s guidance, I employed a database-driven structure that allows for the systematic storage and presentation of content, such as Hansard references or newspaper clippings, organized for user exploration.[1]
In building the project, I used a range of standard digital research tools: search engines (Google and specialized archives), academic databases (e.g., JSTOR), the Omni library catalogue, and word processors. I also prioritized usability and accessibility in the exhibit’s design: clear navigation, readable text, and an intuitive structure, all while maintaining depth of content.
While my original plan included the use of AI tools such as ChatGPT and Microsoft Copilot, I ultimately chose not to incorporate them into the research or writing process. For grammar and spelling, I relied on a Grammarly browser extension, which proved more efficient than transferring text between platforms, especially given the time demands of entering data into Omeka and Dublin Core. In preliminary testing, I also found ChatGPT unreliable on factual details: for example, it provided incorrect dates, misinterpreted policies, and even misidentified Mark Carney as the current Prime Minister. These issues led me to rely on more traditional research methods to ensure accuracy.
Experience with Omeka
I chose Omeka for this project because it is widely used in the digital humanities for creating scholarly yet publicly accessible exhibits. Its flexibility in integrating text, images, multimedia, and metadata made it a strong fit for my goals: presenting policy documents, parliamentary excerpts, and historical context in an engaging, organized way. I also appreciated that Omeka is specifically designed for building collections and exhibits, allowing me to create a resource that could be navigated by theme, chronology, or topic, something a static document or traditional website would not have achieved as effectively. An earlier project had given me basic familiarity with the platform, though I knew this project would push me to develop more advanced skills.
My experience with Omeka in this project was a blend of rewarding discoveries and technical challenges. Although I had used Omeka before, that earlier project was completed in a highly collaborative environment with five classmates, where tasks were divided and shared. It also took place over a much longer winter semester, which gave us more time to experiment and refine. In contrast, this project was an independent effort completed under a tighter timeframe, which made every decision and problem-solving step my own responsibility.
There were many positive aspects to working with the platform. Creating multiple pages allowed me to experiment with different ways of structuring historical content for an online audience. I was able to integrate text, images, and multimedia elements into a cohesive presentation, which not only made the exhibit more engaging but also deepened my understanding of how historical narratives can be communicated beyond traditional essays. Viewing other Omeka-based exhibits in the “class page” gave me insight into different design and organizational strategies, which I could adapt to my own work. The process also required creative problem-solving, for example, finding alternative ways to present timelines when my initial technical plan did not work, pushing me to think more flexibly about digital presentation.
Of course, there were challenges. One of the most significant was my attempt to implement “Neatline” timelines. I devoted an entire day to uploading, watching numerous tutorial videos, and consulting online forums, but I was ultimately unable to get the feature to function properly. While this was frustrating, it was also a reminder of the steep learning curves involved in certain digital tools and of the importance of having contingency plans. Another challenge was the detailed entry of Dublin Core metadata for each item. This process was extremely time-consuming, but it was also one of the most valuable parts of the project, as it taught me the principles of metadata organization, standardization, and accessibility in digital collections.
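To illustrate the kind of standardization that Dublin Core enforces, the sketch below checks a record against the fifteen elements of the simple Dublin Core element set. This is purely illustrative: the item described is hypothetical, and Omeka collects these fields through its admin web forms rather than through code.

```python
# Illustrative sketch only: Omeka enters Dublin Core metadata through
# web forms, not code, and the item record below is hypothetical.

# The fifteen elements of the simple Dublin Core element set.
DUBLIN_CORE_ELEMENTS = [
    "Title", "Creator", "Subject", "Description", "Publisher",
    "Contributor", "Date", "Type", "Format", "Identifier",
    "Source", "Language", "Relation", "Coverage", "Rights",
]

def validate_record(record):
    """Return any field names that are not simple Dublin Core elements."""
    return [field for field in record if field not in DUBLIN_CORE_ELEMENTS]

# A hypothetical item record, as it might be entered for one exhibit item.
item = {
    "Title": "Example policy memorandum on immigration selection",
    "Type": "Text",
    "Language": "en",
    "Date": "1967",
}

print(validate_record(item))  # prints [] — every field is a standard element
```

The value of this discipline is that every item in the collection answers the same questions (who made it, when, in what format), which is what makes systematic searching and browsing possible.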
Overall, working with Omeka strengthened my skills in digital curation, project organization, and user-focused presentation. I can see its potential for a wide range of historical and archival projects, whether as a teaching tool, a public history platform, or a research repository. I hope to use Omeka again in future academic or professional work, ideally with more time to explore its advanced features, such as interactive mapping and timelines, and to build on the lessons I learned during this project.
__________
[1] Cohen and Rosenzweig, Digital History: A Guide to Gathering, Preserving, and Presenting the Past on the Web.