Presentation at ESTS 2015

I recently attended the European Society for Textual Scholarship’s 2015 conference, held at De Montfort University in Leicester, UK, where I gave a presentation entitled Beyond Google Search: Editions as Dynamic Sites of Interaction. The presentation examined some of the common UI tropes and metaphors we rely upon in digital scholarly editions and how these elements are applied. It consisted of a discussion of interaction design, a breakdown of the common tropes and metaphors alongside a comparison of 14 different scholarly editions and which metaphors each employs, and a brief case study of the Letters of 1916, a project at Maynooth University with which I have had the pleasure to be involved.

While my plan is to turn this presentation into a paper for the ESTS 2015 Variant, I have had some requests for the presentation slides, as a few people were interested in the content. As such, I’ve included a link to a Google Slides version of the presentation, which can be found here.

This presentation only begins to scratch the surface of my research, and I am more than happy to discuss any questions or comments you may have. Please feel free to use the contact form on this blog to get in touch with me.

Happy reading!

Modeling Maynooth Castle Part 7 – The Final Product

The castle is finally complete! Since the last post, I’ve added some trees to the scene and cleaned up the last of the materials. Unfortunately, exporting the scene to Sketchfab is not quite as successful as the rendered version in 3DS Max. For some reason, the materials used on the roofs of the various buildings do not export, and neither do the materials for the trees. However, I think it’s important to be able to move the model around, so I’ve decided to include the Sketchfab version, even though it only looks partially complete.

The Real Final Product

The real product looks pretty good when rendered in 3DS Max.  I’ve included a number of screenshots from the rendered version below.  Hopefully, these will give you an idea of what the final product looks like in 3DS Max.

Front of Castle

East Side of Castle

Back of castle. Note shadows from sunlight (which is behind castle)

West side of castle

The Sketchfab Version

Below is the version that exported to Sketchfab. While this is the more “interactive” model, sadly the export process from 3DS Max doesn’t seem to play well with textures that are embedded in 3DS Max. You can see that both the trees and the roofs of the buildings (both of which use 3DS Max built-in materials as opposed to custom materials) don’t export to Sketchfab. This is unfortunate because it doesn’t give the full picture of the model, but this version does give you something to play around with.

UPDATE

I tried adding custom materials to the roofs and the foundation of the keep outset building in order to get those to render. While I don’t like the final output in 3DS Max as much as the original (and thus, for the sake of the project, I’m keeping the original intact), it does look a little better in Sketchfab since those areas now have exported materials. I’ve included this new Sketchfab version below.

Perils of Project Planning

For my contribution to the Woodman Diary, the project we are creating for Digital Scholarly Editing, I took on the role of Project Manager. I thought I would take a few moments to discuss something that is often talked about yet frequently overlooked in software projects: project planning.

What is Project Planning?

Project Planning, as defined by Rouse, is “a discipline for stating how to complete a project within a certain timeframe, usually with defined stages, and with designated resources.” [1] The three components—scope, schedule, and resources—mentioned by Rouse are often referred to as the “Scope Triangle” or “Triple Constraint”. The notion of the “Scope Triangle” dictates that scope, schedule, and resources form three sides of a triangle. One of these three is always fixed, while the other two, though flexible, move in proportion to each other.[2] For example, if the schedule is fixed—meaning the delivery date of the project cannot be changed—then if additional scope (new features) is added, more resources (also sometimes referred to as budget) must be added to accommodate the increase in scope. The “Scope Triangle” is used to ensure both the stability and the overall quality of the project and its deliverables. After all, one cannot logically assume that a project originally stated to take x months to deliver y features on a budget of z can still launch at the same time if the budget is suddenly reduced or new features are added.

Consider this analogy. You decide to build a new home, and so you hire a company to do the work.  You agree with the company that they will build a 1,000 square foot home in 6 months for €100,000. Three months into the project, you decide 1,000 square feet isn’t big enough, and you wish to add another 500 square feet to the home.  Certainly you would expect it to cost more—and quite possibly take longer—than what was originally agreed to. However, for some reason, this notion often flies out the window with regard to software projects. Thus project managers are often brought in to ensure the “Scope Triangle” is adhered to, and the project remains on track with a high level of quality.
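The proportionality in the house-building analogy can be sketched as a toy calculation. This is purely illustrative; the function, units, and numbers are my own invention, not a real project-management formula:

```python
# Toy illustration of the "Scope Triangle": with the schedule fixed,
# added scope must be absorbed by added resources.
# All names and numbers here are hypothetical.

def required_resources(scope_units: float, schedule_months: float,
                       productivity: float = 1.0) -> float:
    """Team size needed to deliver `scope_units` of work in
    `schedule_months`, at `productivity` units per person per month."""
    return scope_units / (schedule_months * productivity)

# Original agreement: 60 units of scope over a fixed 6 months.
baseline_team = required_resources(60, 6)   # 10 people

# Mid-project, 20 more units of scope are requested; the deadline
# cannot move, so the team must grow.
bigger_team = required_resources(80, 6)     # roughly 13.3 people

assert bigger_team > baseline_team
```

The point of the sketch is simply that the two flexible sides of the triangle cannot move independently: holding the schedule constant forces scope and resources to rise together.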

Perils & Pitfalls

Most people think of project planning as creating Gantt charts and putting dates on deliverables. While that is certainly a component, it is far from the only one. Below, I’ve listed some of the most common mistakes made in project planning:

  1. Thinking Too Small – project managers need to think big, and I don’t mean in regard to scope.  The biggest mistake that can happen while project planning is not considering all of the possible avenues. What if we lose some of our resources due to illness or vacations? What if the server blows up, and we need to buy a new one? What if some feature we really like isn’t technically feasible? All possible avenues need to be explored during the planning phase.  There is no scenario too far-fetched.
  2. Making Assumptions – often, we make assumptions about the projects we are working on. “The computer centre will set up that server for us.”  “That feature is very easy to implement—I’ve seen it done before.” “That software is easy to customise.” The list of examples is endless. But what if the computer centre is unable to set up the server due to their own time or resource constraints? What if the software isn’t so easy to customise or is restricted due to licensing constraints? What if that feature seen elsewhere took months to build and isn’t distributed and thus must be recreated? All of these items can have a significant impact on a project and cause it to derail.  Therefore, it is important to identify assumptions early on and plan accordingly.  Making assumptions is not necessarily a bad thing, but failing to identify them is a major problem.  If they aren’t identified, then contingency plans cannot be created.
  3. Failing to Identify Risks – every project has risks.  Some are obvious: loss of resources due to illness, scope creep (the subtle addition of new features that, while individually small, cumulatively add considerable scope to a project), scheduling constraints, etc.  Every project, however, also has risks that are unique to the project itself.  For example, while planning for the Woodman Diary, we identified a risk regarding our software implementation.  At the time, we had yet to choose a software package for the diary, so there was a risk that the package we chose could have an impact on our schedule, as it could potentially be more difficult to implement than we assumed (also, for further emphasis, see the above item regarding assumptions). Identifying risks early on allows the team to research mitigation tactics.  In fact, not only should every risk be documented, but a mitigation plan should also be created for each risk in order to identify how likely the risk is, what its impact on the project overall could be, and how the risk will be mitigated. By doing so, the team reduces the potential number of surprises that could arise during implementation.  The fewer surprises, the smoother the implementation.
  4. Forgetting the Goal – every software project has a sense of excitement about it.  The team is creating something new and many participants want to innovate or make something that has that “wow” factor.  Thus, it’s easy to get caught up in the “glitz and glamour” and forget about the goal. Whenever the team is considering adding a new feature or changing an already defined feature, the first question that should be asked is: does this change bring us closer to accomplishing the goal of the project? If the answer is “no”, then the feature should be scrapped.  It doesn’t matter how “neat” the feature might be; if it doesn’t serve the goal of the project, the feature is ultimately a distraction.  Of course, if the team answers that question with “What is the goal?”, then a much bigger problem exists.  Before project planning even begins, a goal must be clearly set out and communicated to—and agreed on by—the team.
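The risk register and mitigation plan described in point 3 can be captured in a very simple data structure. The sketch below is a minimal illustration; the field names, scoring scale, and example entries are my own assumptions, not an artefact of the Woodman Diary project:

```python
# A minimal risk register: each risk is recorded with its likelihood,
# its impact, and a mitigation plan, as described above.
# Field names, the 1-5 scale, and the entries are hypothetical.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)
    mitigation: str

    @property
    def severity(self) -> int:
        # A common convention: severity = likelihood x impact.
        return self.likelihood * self.impact

register = [
    Risk("Chosen software package is harder to implement than assumed",
         likelihood=3, impact=4,
         mitigation="Prototype with two candidate packages before committing"),
    Risk("Loss of a team member to illness",
         likelihood=2, impact=3,
         mitigation="Document tasks so work can be reassigned quickly"),
]

# Review the riskiest items first.
for risk in sorted(register, key=lambda r: r.severity, reverse=True):
    print(f"[severity {risk.severity:2d}] {risk.description}")
```

Even a register this small forces the team to write down likelihood, impact, and a concrete mitigation for every identified risk, which is the real value of the exercise.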

Conclusion

Project planning is a vital part of any endeavour, especially when creating or implementing software (and ultimately, every digital scholarly edition is, at its heart, a software project).  It should never be ignored, lest the project fall into chaos and disarray. That said, it is important to remember that it is about more than just marking down due dates next to features and holding the project team to a schedule.  Project planning is also about seeing the big picture and knowing how to respond to unexpected situations that may arise.  Project planning is much like warfare—considering all the various angles and developing strategies for dealing with the enemy. However, in the case of project planning, the enemy is often ourselves and our own failure to look ahead.

References

[1] Rouse, Margaret. “What is project planning?”. Whatis.com. March 2007. Web. 19 April 2015.
[2] Jenkins, Nick. “A Project Management Primer: Basic Principles – Scope Triangle”. ProjectSmart.co.uk. n.d. Web. 19 April 2015.

Further Reading

Haughey, Duncan. “Project Planning: A Step-by-Step Guide”. ProjectSmart.co.uk. n.d. Web.
Kerzner, Harold R. Project Management: A Systems Approach to Planning, Scheduling, and Controlling, 11th Edition. Hoboken, NJ: Wiley & Sons. 2013. Print.
Project Management Institute. “The Project Management Office: Aligning Strategy and Implementation”. PMI.org. April 2014. Web.
– – -. “The Value of Project Management”. PMI.org. 2010. Web.
Sylvie, George, Jan LeBlanc Wicks, C. Ann Hollifield, Stephen Lacy, and Ardyth Broadrick Sohn. Media Management: A Casebook Approach. New York, NY: Taylor & Francis. 2009. Print.

Explorations in Photogrammetry – Part 4

For part 4 of my series on photogrammetry, I will discuss the creation of a 3D model of the sepulchre, the object I’ve chosen for the second part of my 3D recording assignment (for more information about the sepulchre and the process of photographing it, see Explorations in Photogrammetry – Part 3). As with the bowl from the National Museum of Ireland (see Part 1 and Part 2 of Explorations in Photogrammetry), constructing the model relied on Agisoft’s PhotoScan Pro and Adobe Photoshop.

Creating the 3D Model

The process of editing the pictures in Photoshop was no different than it was for the bowl from the National Museum. As I covered this in Part 2 of this series, I will simply refer you to that post for more information on the Photoshop process.

The creation of the 3D model in PhotoScan was somewhat different this time.  The initial process was still the same as for the bowl.  I imported the 178 photos into PhotoScan and used the programme’s image quality assessment tool to evaluate them.  To my surprise, I found the quality of the images to be considerably higher than for the bowl:

Number of Images    Quality          % of Total
4                   .50 – .59        2%
174                 .60 and above    98%

However, I knew from my experience taking the photos that some would need to be manually filtered out.  These were primarily photos where the sun was at the wrong angle and caused bright spots to appear, or where the photos were so bright as to look overexposed.  This led to a further 37 images being filtered out of the result set.
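The two-stage filter described above (an automatic quality score followed by a manual pass for sun-affected shots) can be sketched in a few lines. This is a generic illustration, not PhotoScan’s actual API; the threshold, scores, and file names are assumptions:

```python
# Sketch of the filtering described above: drop images below an
# automatic quality threshold, then drop those flagged in manual review.
# The scores and file names below are made up for illustration.

def filter_images(scores: dict[str, float], threshold: float,
                  manually_rejected: set[str]) -> list[str]:
    """Keep images scoring at or above `threshold` that were not
    rejected by hand (e.g. for bright spots or overexposure)."""
    return sorted(name for name, quality in scores.items()
                  if quality >= threshold and name not in manually_rejected)

scores = {"IMG_001.jpg": 0.82, "IMG_002.jpg": 0.48,
          "IMG_003.jpg": 0.71, "IMG_004.jpg": 0.65}
kept = filter_images(scores, threshold=0.50,
                     manually_rejected={"IMG_003.jpg"})  # overexposed
# kept == ["IMG_001.jpg", "IMG_004.jpg"]
```

The manual pass matters because, as with the 37 sun-affected images here, a photo can score well on sharpness-based metrics and still be unusable for reconstruction.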

I then began to apply a mask to each photo.  However, this process proved to be much more involved than with the bowl. When applying the mask to the photos of the bowl,  I was able to use the magic wand tool to select the background (which was a uniform colour), and with a few simple clicks—and a reasonably high success rate—filter out everything but the bowl object itself. The photos of the sepulchre proved much more difficult.

Due to the lack of uniformity in the background (which contained grass, trees, and other objects in the cemetery), I was unable to utilise the magic wand tool. Thus, I found I had to draw the mask around each view of the sepulchre in every picture individually, using the lasso tool. This was not only time consuming but oftentimes difficult to do precisely, as objects from the background would occasionally blend into the sepulchre, making it difficult to determine where the sepulchre began and other objects ended.

Once I finished the masking, I began to run the images through the standard processing workflow in PhotoScan Pro. For some reason, however, these images were taking much longer to process. While aligning the photos took about an hour and a half (which was to be expected), building the dense point cloud proved to be a challenge. I initially kicked off this process on my Sony Vaio, a relatively powerful laptop with 8GB of RAM, a dedicated video card, and an Intel i7 1.80GHz chip. After it had run for almost 40 hours and was only 40% of the way through creating the point cloud, I decided to cancel the operation and switch to my MacBook Pro, which has 16GB of RAM and a 2.3GHz Intel i7 chip. As of the writing of this post, the process to build the dense point cloud has been running for 17 hours and is approximately 50% complete.

When I saw how long this model was taking to process, I decided to try another method. Rather than applying a mask to every photo and building the dense point cloud from the masks, I simply cropped out areas of the photos that didn’t belong (such as a few frames where I managed to capture my book bag or another visitor to the cemetery). I then aligned the photos using PhotoScan Pro’s built-in alignment tool, manually cut out the extraneous information the software pulled in, and created a dense point cloud. From there, I continued to manually remove the points I did not wish the application to include in the final model (such as some of the surrounding objects), then built a mesh and applied textures to create the model. This method was much less time-consuming, as I didn’t need to apply an individual mask to each photo. Processing all of the photos to build the dense point cloud took about 20 hours—still a very intensive process, but much better than the time it took with the masks.

Assessment

Overall, I am very happy with the model itself.  It turned out rather well given the lighting conditions and difficulties I had with background objects and processing.  As learning outcomes, I would make note of the following:

  1. Try to take the pictures on an overcast day.  While this is not always within your realm of control, pictures taken when the sun is hidden behind a cloud tend to be easier to process, especially since they do not have shadows.
  2. Consider your environment.  One thing I did not take into consideration was the surrounding environment.  Had I thought of it, I might have taken some old sheets to drape over some of the surrounding objects.  This would have made applying the masks easier.
  3. Don’t always rely on the creation of masks.  For this object, I found the model that did not rely on masks but rather required me to manually edit the point cloud to be much easier to create. With this type of object, given the background items, I highly recommend this approach.

Final Model

As I mentioned earlier, the model utilising the mask approach is still processing. Once it finishes, I may post it here as an update. However, the model that I created without the use of masks turned out rather well. You can view it below.

Model Created Without the Use of Masks

Coming Up Next…

In my final post, I will discuss the process of 3D printing. I will evaluate available 3D printing services and explain which one I chose.