I’ve just refreshed my literature review and am working on the final-final-final edit of my dissertation. I’ve also been double-checking my word counts, which are semi-automated. There was a slight glitch (nothing that is a game changer), so I’m making a note of the process that I finally found works. This is using MS Word and the Zotero plugin for referencing – if anyone reading this wants to reuse it, please test that it works with your setup before relying upon it.
A widely suggested idea is to create a separate ‘character style’ (not paragraph) called something like ‘references’ and change all citations and inline references to this style. This means they can be separately identified and counted by Word.
For the total word count, simply highlight all the text that needs to be included in the word count (ie exclude index, table of references etc) and see the total word count in Word’s display. Counting the citations and inline references that need to be deducted from the total is not so straightforward.
I initially imagined that I could just use Word’s styles pane to select-all ‘references’ (just highlight a piece of text that is styled ‘references’ and then select-all from the pane). But here’s the pothole. Although the Zotero inline references are marked up as ‘references’, they are not selected and counted. This is because as long as they are connected to Zotero, they are treated as updateable fields rather than text.
To get around this, save a separate copy of the document to work with (don’t mess up the original) and in the Zotero plugin, use the chain icon to unlink Zotero (there’s a warning that there will be no automatic updates in the document). Then follow the previous step to select ‘references’ and the correct number of words is returned to deduct from the total word count.
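For anyone who prefers to script the check, the counting logic can be sketched in Python. This is a hedged illustration, not my actual process: the python-docx library (if installed) exposes each paragraph’s runs with a .style.name and .text, and those could be fed into a function like this as (style_name, text) pairs. Note that Word’s own counter may split words slightly differently.

```python
# Hedged sketch: counting words in runs tagged with a 'references'
# character style, versus the total. With python-docx (not shown),
# each paragraph yields runs with .style.name and .text; the same
# logic applies to any (style_name, text) pairs.

def word_counts(runs):
    """runs: iterable of (style_name, text) pairs.
    Returns (total_words, words_net_of_references)."""
    total = 0
    references = 0
    for style, text in runs:
        n = len(text.split())
        total += n
        if style == "references":
            references += n
    return total, total - references

# Illustrative data only
runs = [
    ("Default Paragraph Font", "The canal was built in stages"),
    ("references", "(Smith, 2019, p. 42)"),
]
print(word_counts(runs))  # (10, 6)
```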
Yesterday, I ran through a test pack of paper to choose which one to order to print a selection of my photographs at A3 size. I’ll also keep the prints as reference for future paper orders.
Many of the canal images have a wide tonal range and include large areas of shadow – much of the canal is enclosed by trees and hedgerows. I usually print on a lustre paper as it has a larger DMAX than matt and doesn’t have the shine of gloss that seems more suitable for fashion-style images and glossy brochures. My test results weren’t surprising: the matt paper couldn’t sufficiently hold shadow details and the gloss held the shadows but seemed incongruous with the images. Included in the sample pack was a silk Baryta paper, with a slight sheen and warm white (top image above). This retained good shadow detail and its tone worked well with the subdued colours of the images.
With the second test sheet, I printed a different image for my wall.
Seeing the image on the wall (albeit only A4) made me think of putting up a small home exhibition and recording a video walk-about. Perhaps as a promotional resource for SYP.
Yesterday’s Zoom session was a general discussion about research, opened out to the group to raise discussion points. I note here areas of interest to me personally:
Methodology was mentioned at several points during the discussion. I observed that my dissertation was not so much about photographic representation but was interested to hear about another student who had started their research, focussing on images. She mentioned the reference – Visual Methodologies: An Introduction to the Interpretation of Visual Materials by Gillian Rose. I’ve downloaded a copy from the OCA library to scan through – it sounds interesting / important enough to read through even if I don’t need more source material for my dissertation.
There were a number of helpful comments on the reasons for research and what makes for a good research document. I note some of these here as reminders to take inspiration from during my rewrite:
CS needs to allow one to better express one’s practice.
Research brings meaning into a world that already has meaning. It is the bringing together of pieces of meaning to make something different through the lens and voice of the researcher.
Research involves ‘a conversation with your sources’. The interaction between two minds.
As one moves on to finalise the research, it becomes more of an internal conversation between the researcher as writer and the researcher as reader. The academic voice reflecting the way the researcher’s mind works.
On a practical point, there was a discussion of how much of CS material needs to appear on the blog. Much of mine is contained within Zotero, along with notes on various sources. Ariadne explained that its form didn’t really matter, as long as it was accessible, eg as a separate pdf document and clearly signposted.
Another useful session to keep the CS fire burning!
One drawback of Adobe’s Portfolio site builder is its poor slide-show options – to use a lightbox, the individual images need to be included on the page (either as single images or as a grid), which takes away from the clean look I was aiming for.
However, Portfolio does support a number of different embeds through iframes. After some research, I found that Google Slides is a good solution without additional cost. For it to work cleanly, the embed code from Google Slides needs to be edited so the Google logo and viewer controls are not displayed. This is done by adding &rm=minimal after the delay time set for the slides. So my embed code looks like this:
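In general form, an embed edited this way looks something like the following. The presentation ID is a placeholder (substitute your own from the Publish to web dialog), and the pixel dimensions are purely illustrative:

```html
<!-- Placeholder ID and sizes, for illustration only -->
<iframe src="https://docs.google.com/presentation/d/e/PRESENTATION_ID/embed?start=true&loop=true&delayms=5000&rm=minimal"
        width="960" height="569" frameborder="0"
        allowfullscreen="true"></iframe>
```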
A little experimentation is required with the pixel dimensions to avoid a thin line appearing inside the frame – however, this has not been entirely successful on the Portfolio website for reasons beyond my technical understanding!
On my microsite, I am including a selection of 10 still images as a foil to the video and a space where viewers can linger on individual images. These images will also be made into physical prints. I thought about how to convey the impression of prints on the website and concluded that they should somehow be separated from the background with borders. The background on my website is black – partly to echo the traditional black and white colouring of the canal furniture (and also the buildings in the past) and partly as a suitable backdrop to the video that is included on the site.
Lightroom’s print module (export to jpg) seemed an efficient way to manage this without resizing images individually in Photoshop to fit on white backgrounds. It allows multiple images to be output in one go.
One problem did take some working at – there is no specific setting for the pixel dimensions on the jpg export, and it appears to default to around 900px on the long side. This is insufficient for large screens and I found it resulted in blurring of the images. I found that changing the ‘file resolution’ setting from the default 75 ppi increased the pixel dimensions of the exported file. 150 ppi gave me files approaching 2,000px on the long side, which is enough.
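The arithmetic behind this is simple enough to sketch: the exported pixel count on an edge is the printed cell size in inches multiplied by the ‘file resolution’ setting. The 12-inch long edge below is an assumption for illustration, not a Lightroom default:

```python
# Sketch of Lightroom's print-to-JPEG sizing: exported pixels on an
# edge = layout size (inches) x 'file resolution' (ppi). The 12in
# long edge is assumed here purely for illustration.

def export_pixels(layout_inches: float, ppi: int) -> int:
    return round(layout_inches * ppi)

print(export_pixels(12, 75))   # 900 px - roughly the size I was seeing by default
print(export_pixels(12, 150))  # 1800 px - comfortably sharper on large screens
```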
This is a quick / neat solution to obtaining bordered jpgs where precise pixel dimensions are not important.
Technology and applications are continually evolving – Adobe seems to release significant application updates every 6 months or so! I thought I’d reassess my printing workflow, which has been from Photoshop. A few interesting points discovered during the afternoon:
Marrutt, the ink and paper supplier, posted a series of videos on output dpi for printing. They say this was based on discussions with printer technicians and industry experts (including Martin Evening, the multi-book Photoshop author). Until now I’ve been following advice in oldish books that there is little point in outputting beyond 360dpi as it makes no difference to print quality. However, it seems that this was an arbitrary number, possibly from when computers and printers were slower, and best results (ie with clearest resolution) are obtained by outputting at the maximum possible dpi based on file size for the print size. The rationale being that this preserves the maximum actual camera-pixel information in the print, rather than digital approximations. A series of tests seems to prove this point. So, a change in my workflow will be to stop limiting my output dpi to 360 when I have pixels to spare.
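The reasoning can be put as a back-of-envelope calculation: if the file’s native resolution at the chosen print size exceeds 360 ppi, capping output at 360 discards pixels, whereas sending the native figure keeps them. The camera and paper numbers below are hypothetical:

```python
# Native print resolution = pixels on the long edge / printed inches.
# Example numbers are hypothetical (roughly a 26MP file at A4).

def native_ppi(pixels_long_edge: int, print_inches: float) -> float:
    return pixels_long_edge / print_inches

ppi = native_ppi(6240, 11.7)  # A4 long edge is about 11.7in
print(round(ppi))             # ~533, well above the old 360 cap
```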
Lightroom’s print module seems to have evolved since I last used it. It seems to be straightforward to create soft proofs using ICC profiles and also to save these versions as proof copies. It seems to me a more efficient workflow doing this in LR rather than switching on/off additional layers in PS for different output formats. The PS version then remains a straight print master. I’m swapping to LR for printing. I’ve also heard good things about LR’s AI output sharpening for prints – for my basic requirements, I think this should be more than sufficient.
Out-of-gamut colours – these have been a source of frustration in the past, trying to fix OOG warnings in soft proofs. An option is to ignore them and let the printer bring the areas back into gamut, but this feels counterintuitive, given the manual work put into honing print master files. As a test, I experimented on one image that included significant shadow areas and saturated highlights (the image above). I found the best results by adjusting the black point to the paper using a curves adjustment and leaving the printer to deal with the saturated highlights itself. The blacks adjustment is important on lustre paper to avoid the loss of shadow detail – there was a significant difference on my test prints; without correcting gamut the shadow areas were close to completely flat. Of course, the result will depend on the image. An advantage of printing at home is being able to make test prints and reprint if necessary. I’ll consider the OOG warnings critically and possibly ignore them depending on the specifics of the image.
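The black-point move can be sketched as a simple linear remap of the tone curve’s output black up to the darkest value the paper can actually print. The value 18 below is an illustrative paper black, not a measured figure for any particular lustre stock:

```python
# Raising output black to the paper's printable black, expressed as a
# linear remap of 0..255 tones. paper_black=18 is illustrative only.

def lift_black_point(value: int, paper_black: int = 18) -> int:
    return round(paper_black + value * (255 - paper_black) / 255)

print(lift_black_point(0))    # 18: shadows no longer ask for a black the paper can't show
print(lift_black_point(255))  # 255: highlights unchanged
```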
Overall, time well spent printing with an old image that I’m not as involved with as my current BoW – it allowed for a dispassionate view! I’ll now put this into practice with my BoW prints.
Having successfully sequenced, added transitions and exported a movie file from Keynote, I turned to Garageband to add sound and work around Keynote’s very limited sound capabilities.
I was already familiar with sound recording from forays into music recording but had previously used Logic Pro when Garageband was a much more basic tool. It has developed significantly over the years. After some experimentation I arrived at a method for putting sound to my movie.
The ambient sound clips I’d recorded on my iPhone while walking the canal were saved as files from the recording app on my phone and then simply dragged into GB, where they were automatically added as separate tracks. These were then cut to size and placed under the desired image frames. Automation of volume, panning and so on is possible for each track in GB – so I made adjustments to these.
To record the narrative, I used GB on my iPad – using a separate section for each verse to make it easier to do several takes without re-recording the whole narrative, and also to make it easy to place each verse separately against the movie frames. The track sections were then copy/pasted into the main GB file.
What wasn’t as easy to spot in GB was overall sound mastering – it is perhaps a recent addition, as some YouTube tutorials were suggesting bouncing the entire track down and then reimporting it to master. In fact, there is a separate master track; it just isn’t shown by default. Mastering allows the addition of overall compression, EQ, adjustment of stereo spacing, and limiting. This is important to bring the sound together into a coherent whole when building from different sources. One tip I found was to turn off the preference for automatic sound optimisation on output – this takes a cautious approach and reduces dynamic range and volume.
When done, it is easy to ‘export’ the movie with the sound added to the file with no loss of quality.
I found that this approach offers a way of creating movies from photos with simple transitions that allow the photos to remain centre stage. This was important to me – I wanted to use the movie format to show the photos to good advantage, rather than to use the photos to make a movie, in which they would be more like raw ingredients to be chopped and added to the mix. The workflow is also much quicker for me than using more complex tools like Premiere Pro.
As little as I’m a fan of genres and labels, my work sits solidly in the domain of the psychogeographic, and I suppose that is a useful indicator for anyone who understands what that term means. As I’ve started to work on a video production to show my work, I have also thought about what that medium offers and how to distinguish a video comprising still images from a straight photographic slide show. I see this as resting in the use of sound and the conveying of a narrative – for example, in the work of Laura El-Tantawy, explored in a couple of previous posts, and in the Robinson series of films by Patrick Keiller.
Feeling some creative force this morning, I drafted my own psychogeographic prose to accompany my video. I’ve asked a writer friend to review and suggest edits to avoid working on sound recording only to find there are unseen flaws in my own prose.
Yesterday, I spent time with Apple’s Keynote application to gauge its suitability for my production. I was surprisingly encouraged by what I found. It is easy to manage the placement of images on the black screen space, there are a number of suitably subtle transition options, and individual image timings can be accessed and adjusted (though it was clearly not designed with this in mind, as bulk editing isn’t supported). A weakness is its management of sound: sounds are not editable as separate tracks but attached as files. However, I’ve found a simple way around this by not adding sound in Keynote but exporting the video and then importing it into Logic Pro (or Garageband desktop), where sound can be added and managed in separate tracks. The finished production can then be exported as video and sound. Keynote is completely unsuitable for more complex video production, but for what will essentially be a slide show it is perfectly adequate when used with a separate sound application. Its ability to export to html could also have other uses – for example, page turning of a book for a website. Premiere Pro, understandably, cannot export to html.
I concluded in my previous post that I needed to get over any deep-rooted trauma about ‘slide shows’, both from childhood 35mm and many deaths by PowerPoint in a business context. This got me thinking about other technical solutions to production – Adobe Premiere Pro is, after all, over-engineered for slide show production and is aimed at professional movie making. Could I obtain what I need from a simpler tool without too much compromise and make for a much quicker workflow?
I looked into a number of options.
There are a number of online applications, which I quickly discounted as I want something that can manage large photo files easily without uploading/downloading.
Keynote – I’ve never used Keynote but note that it has some useful output options, including a movie format and html that can be uploaded to websites. The latter is interesting as the output wouldn’t require hosting on Vimeo or YouTube and could be embedded directly. It also offers a record function for timing slide movements (and voice-over if needed), which seems to be adjustable manually. It’s free with Macs.
PowerPoint – one through which I’ve suffered heavy trauma from tedious ppt shows, but which I have used extensively, if not for photos. It is essentially the Windows equivalent of Keynote but also works on Mac. I have access included in my MS Office student membership.
Premiere Pro – it would be possible to use this in a simpler way without getting into all the fine-tuning it offers; using it with a slide show in mind rather than a movie from stills. This may not turn out to be an issue, but Premiere Pro is a video tool, so it obviously does not export to html like Keynote and PowerPoint.
For my next experiment in ‘film making’, I’m going to make slide shows. I’ll try both Keynote and Premiere Pro. For Keynote, I’ll also see whether exporting to html and embedding in a website is an advantage over pure video.
After discussing A3, I decided to make a ‘film’ of my photos to show them rather than an ‘interactive ebook’. My initial draft ebook was too busy / distracting in retrospect, and I’ve already tried pulling it back towards something more paper-book-like. However, this felt like a pale imitation of the physical object. I may ultimately make a paper book (perhaps as part of SYP) but for now I’m working within the constraint of digital-only assessment and how to show the work to best effect.
My starting point was ‘how to make something more interesting than a slide show’. There are some interesting examples of people animating photographs using After Effects but this seems to be a major project in its own right. Oh to have the resources of U2 …
I looked into the basic technicalities of putting a simpler version of this together – separating the elements of each photo into layers, filling in holes in the background, and animating within After Effects. As well as the time required to do this for a large number of photographs, it became clear that shooting would have to be done with this end in mind – photographs would have to contain elements against clean backgrounds to allow effective animation.
A lower-tech solution is to make use of panning effects around photographs to suggest movement. The 1962 film La Jetée seems to be a reference piece in this kind of approach …
This is more than ‘showing photographs’: it is making a narrated film using photographs. But I chose this as a starting inspiration for my first experiment.
Tools – I looked at and quickly dismissed a number of tools for video making, mostly because they offered very little control over movement or were aimed at providing quick output for social media content: Photoshop, Adobe Spark Video, Adobe Premiere Rush, Lightroom slideshow. I have Premiere Pro as part of my Adobe subscription and have used it before, so I went with this.
Photo edit – when making this kind of series, the edit changed from my preferred photos to something that would allow continuity of movement throughout the frames. This is the first hazard: it potentially becomes less about showing the best photographs and more about making an ‘animated’ flow work.
Adding movement – after some trial and error, it is not difficult to add movement by panning and zooming using keyframes, though it does seem to be very heavy on computer resources. Premiere Pro also offers a range of ‘transitions’ to move between stills. For this kind of approach, image files need to be large so that detail is retained when zooming.
The curse of aspect ratio – as with the ebook, the aspect ratio is dictated by standard screen dimensions and making the most of the display area, so 16:9. This raises challenges in respecting the original crop of the image versus making the most of the screen. In experiment #1, I mostly tried to make use of the full screen.
It was unsuccessful because it was too much about movement and not enough about showing the photographs. This also meant a number of photos that were not my first choice were included. Lessons for the next experiment:
Stick with the best edit of the photos for the photos themselves, not for creating movement. It must be primarily about showing the best work.
Respect the crop of the photos to show them to best effect. This means finding a way of dealing with the black space in the frame around them. Leave it black or layer it with something textural / video – there will be a fine balance between creating visual interest and making distractions.
Movement quickly becomes tiring / annoying. The most successful clips were those with very little or no movement. For the next experiment I will avoid large experimental movements such as panning down an element and zooming through doors into the next frame!
In conclusion, I think I am in effect after something more akin to a slide show than a film and just need to make it as interesting as possible. Perhaps I’m still wounded through hours of sitting through slide show projections as a child and need to realise the form in a way that is contemporary with the benefit of new technology over 35mm colour slides.
New Territory Media blog – https://newterritory.media/5-ways-to-use-still-photos-in-movies-that-are-not-the-ken-burns-effect/
Learn about Film blog – https://learnaboutfilm.com/use-still-images-film/
I’ve been spending more time with my images and noticing things that require attention – either over-done or with certain elements requiring further work. There is, of course, a huge chunk of subjectivity in this. Unexpectedly, starting to use Instagram again and cropping images to output at 1:1 or 4:5 (so cropping to the main elements) has helped with seeing.
One of the challenges of shooting in the generally subdued northern English light is that colours lose their intensity. Our human visual system seems to compensate for that as we give attention to elements that interest us. The camera cannot, unless light is artificially added or we return to locations again and again in the hope of brighter days.
In RGB processing mode, I’ve found that colour information is improved to a degree as the monochromatic contrast in images is adjusted. Sometimes this gives me enough, other times I would like still more colour. However, colour saturation adjustments, unless used sparingly, look too artificial and obscure texture details.
The before and after above shows the RAW negative and the image after production. It was a challenging shot, as I wanted to retain some detail in the brighter shadows that could be recovered in post and didn’t want to blow the sky completely. My RAW neg was reassuringly flat, suggesting to me that there would be some balance in the information across the image to work with. This is actually the upside of flat ambient light; in bright sunshine it would not have been possible.
After my usual post-production work, I still found the colours a little disappointing, so I decided to revisit LAB mode to add more colour intensity. In this mode, lightness (L) is completely separated from the colours: the ‘a’ channel is the green–red continuum and the ‘b’ channel is the blue–yellow one (for PS colour modes – https://helpx.adobe.com/uk/photoshop/using/color-modes.html). This contrasts with RGB, where lightness is dealt with through the combined RGB channels (and therefore also affects colour).
I stamped a new visible layer in PS and duplicated it to another document, which I changed to LAB mode. By adding a curves adjustment, the colour intensity was increased without altering the contrast or saturation – it is effectively like adding light to the original scene. Extreme effects can be obtained in this way, so care is needed. However, I can see how this type of adjustment will become a regular riff in my bag of tricks for managing poor light.
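The effect of that curves move can be sketched numerically. In Photoshop’s Lab ranges (a and b run roughly -128..127, centred on neutral 0), steepening the a and b curves around zero scales colour away from neutral while leaving L alone. The 1.3 slope below is an arbitrary illustration:

```python
# Minimal sketch of a LAB colour boost: scale a and b away from
# neutral (0) without touching L. slope=1.3 is arbitrary.

def boost_lab(l, a, b, slope=1.3):
    clamp = lambda v: max(-128, min(127, round(v)))
    return l, clamp(a * slope), clamp(b * slope)

print(boost_lab(62, 10, -24))  # (62, 13, -31): same lightness, more chroma
```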
In a previous post, I said that I would try out the approach of editing the ‘print proof’ in the same colour space as its intended output. The idea being that it would require little further adjustment when it came to printer output. I put this into practice with one print, using a proofing paper and three different test papers. I found that the approach didn’t work efficiently for me in practice. The main drawback was that I often output to different media (including screen and paper types) – this resulted in creating different PS proof files for the same image. This could make for unwieldy file management and complete reworking of adjustments for each colour space.
After some experimentation, I’ve settled on an approach of developing the proof image in full Adobe RGB space and adding a group of layers for each set of adjustments for different colour spaces / papers. These can be activated/deactivated as needed.
I’ve battled with the out of gamut (OOG) warnings in PS (also LR) in the past, having been convinced that they always need fixing to ensure a good print (no doubt through YouTube tutorials and Adobe’s own videos). It is a process that can be tortuous when small OOG areas pop up all over an image. Common print adjustments I tend to make are bringing up the brightness (to compensate for the brightness of the screen deceiving about the brightness of the printed image) and selectively raising blacks if needed (to stop them blocking out in print). Both these things make sense and help the print quality. Another thing that can trigger an OOG warning is heavily saturated colour – the image above was flashing all over with yellow buttercups. An option that I’d not tried before is to ignore the OOG warning – if it relates to small areas where detail is not important, this seems a sensible option. The conversion on printing does its best to bring OOG elements into gamut. This is obviously a very quick way of dealing with the warnings and, in this case, it produced the best print (above); a couple of earlier tries at desaturating yellows in selected areas had left the print looking flat and too cool.
For the print file itself, I stamp a layer from the proof file layers and copy that to a fresh file for resizing and sharpening before outputting. I envisage keeping this flat file separately for anything where making consistent print re-runs is important.
I mentioned in a previous post that I’d revisited my workflow and approach to post-production. After experimenting with a few less important images for Instagram posts, I worked on one of the more important images.
Below is a before and after, showing my previous edit and the current edit, under my updated approach.
I think the main things that have made a difference are:
Separating the preparation of a digital negative from preparing a print proof (I treat a digital sRGB export as a proof also). I read Ansel Adams’ The Print yesterday (more on that in a separate post) and one takeaway was to treat the negative as full of possibilities for realising the end image (much in the same way as capturing the original photo). By optimising the negative (including recovering detail and balancing) as a separate step, without considering the finished output, there is much stronger raw material.
Having a disciplined approach to adjusting broad areas of the image using rough masks (rather than fine selections) gives more subtle contrast throughout the image.
Making fine adjustments on a single layer using the history brush seems to encourage more effective evaluation of the image (without the distraction of masking etc). In the reworked image, attention is better drawn to the chimney, where I want it.
Editing the print proof in the relevant colour space (or print profile) – in this case sRGB. By viewing the gamut warnings in Photoshop before exporting this image, it was clear where the image would become blocked up in the shadows once converted to sRGB. I was therefore able to make a levels adjustment to some shadow areas and avoid the sRGB blocking.
Now I’ve more or less done collecting images, my thoughts again turn to getting the best results from files in post production. This isn’t a significant focus in the OCA courses which, in my view, are more focused on conceptualism than formalism. I realise it is an area I revisit towards the conclusion of each course as my approach evolves and I see others working in post (mostly on the internet) and revisit books on the subject. George DeWolfe’s Digital Photography Fine Print Workshop is useful to me, even if dated. He was a student of Ansel Adams and Minor White, and uses Photoshop to continue the craft of print making in the digital age. Midway through his book (p160), he observes:
The key to this process is perception, not a technical trick. If you can’t see the problem – brightness, contrast, color, softness, sharpness, or whatever it is – then no technique in the world will make your print better.
He describes six aesthetic qualities of a digital print (there is no concern with contextualism in his book) that need to be worked on to develop the form: cropping, contrast, brightness, colour, defects, and sharpness. He discusses at length why the order of work is important, along with his approach to that work in Photoshop (and some other tools).
Recently, while fine tuning images (after making image-wide adjustments), I’ve felt my process a little mechanical – with adjustments and layer masks to target the adjustments, using a mouse. I’ve been feeling the pull to draw on the fine adjustments. Using the hand and pencil with years of learned control is very different to using a mouse and somehow more satisfying. DeWolfe has such an approach in his practice, which I’m going to try in my workflow as I finalise my project’s images.
Here I describe my current work flow and how I plan to adjust/refine the approach. I do this mainly for my own record and to find clarity through having to write it down.
1) Input – negative and base copy (with broad adjustments)
I use Lightroom as my file library. After importing a file, my current practice is to apply a tone preset (from the ‘film types’ available for Fujifilm cameras), make basic adjustments to exposure, add input sharpening, make a slight curves adjustment, and straighten (and sometimes crop) the image if needed. So, the original file is adjusted non-destructively. This creates complications if I want to rework the image later. In future, I’ll create a Photoshop copy and adjust using the Camera Raw filter.
This will create two files that I’ll call the negative (ie untouched apart from ‘film’ toning) and a Photoshop base copy (broad adjustments, white balance, and cropping if needed). DeWolfe emphasises the importance of first working on contrast and light (suggesting the image is viewed in black and white to see this) before working on colour/colour balance – any changes in contrast and light also affect colour tones. I don’t always do this in order and find myself sometimes working circularly as a result. So, a discipline to introduce.
2) Optimising base copy – details and balance
DeWolfe uses a separate step in his workflow to optimise the base copy and retains this version separately from the version worked on for printing; more on that in the next step.
Optimising entails recovering any lost details in dng files and balancing the image (correcting broad areas that are either too bright or too dark, and adjusting contrast if necessary). To recover details, DeWolfe uses external applications (eg Nik Sharpener), but technology has moved on since the book was published in 2006.
Lightroom now has an ‘enhance details’ option (it takes around two minutes to process an image on my MBP) that can be used successfully on dngs from any camera. I also have Iridient X-Transformer, which I bought when I was having problems with Fuji RAW conversion in previous versions of LR. I’ve not generally used either but could do so for selected and important images that contain significant details. I currently tend to balance later in my post-processing, but now realise that this makes little sense as it disrupts any fine tuning of an image. From now on, I will enhance dng details where appropriate and always balance images in advance of detailed adjustments. I’ll check the Iridient vs built-in LR options.
At this point one has a ‘base-copy’ image that has had broad exposure and contrast adjustments, has possibly been cropped, and has been optimised.
3) Proof – setup, overall contrast and colour balance
Contrary to common practice, DeWolfe recommends viewing the image in the correct proof setup at this point – otherwise it needs to be colour balanced again when printing. This is a bit of a revelation to me, having spent time in the past wrestling with proof copies that required another lot of work before printing; it felt like going around in circles. It also occurs to me that if intending to output an image for screen (website, video, ebook) it perhaps makes sense to proof-view in sRGB; I’ve noticed that some of my own images are not quite as satisfactory once converted on export and viewed in a web browser. This would however make for an expansive workflow, given the steps that follow.
Levels adjustments are suggested for overall contrast, brightness and colour balance. Any colour cast is corrected in a colour balance layer. Local adjustments are made in the next step of the workflow. Once completed, an evaluation of a first print should be made before proceeding.
Working in this way would be a departure for me – proofing usually comes at the end and leads to no end of frustration.
4) Proof – local contrast and colour
This is the step where I’m dissatisfied with my current layers / mask approach. DeWolfe advocates using the ‘history brush tool’ to make marks (like an artist) and ‘move forward in a positive, courageous way’. It is the tool he uses for dodging and burning (through different blend modes); painting on local adjustments from snapshots; outlining to separate objects in detailed images; and applying local hue/saturation adjustments. I’ll experiment with the techniques suggested but am wary of the time it could take when editing a number of images.
If subsequently printing on different paper profiles, the existing proof copy could be used as a base for a new proof copy and adjusted as necessary.
5) Final preparation for printing
This involves using a fresh copy of the image since it will be resized and flattened for printing. I’ve never flattened my images prior to printing, but from what I can see online there are advantages in speed of printing and also output sharpening. It would also leave a final print file without layers – a print version that could be reprinted consistently.
DeWolfe first cleans up the image and then saves a flat version. He addresses any noise and sharpening. He doesn’t specifically address resizing images down (it was perhaps not a thing in 2006). However, there is a sound logic to applying sharpening after an image has been resized down, since its pixel dimensions will have changed and the sharpening algorithms will apply differently. His final steps are edge burning and final contrast tweaks using a gradient map (if he feels it necessary).
6) Recap of files in workflow
Neg – unprocessed in LR apart from application of camera ‘film’ tone.
Base copy – cropped, broad contrast and colour adjustments, optimised detail, balanced image
Proof copy – viewed using output colour profile, contrast & colour corrected, local adjustments, and cleanup
Print – flattened, denoised, sharpened, final contrast adjustments, edge burning. Permanent record of a specific print.
A fellow student commented on a previous post about outputting to Instagram – observing that it sounded like a lot of work. They are partially correct but things perhaps always look longer when written than acted out. I don’t think it’s realistic for me to apply this kind of workflow to all images, just the ones destined for fine printing and large screen viewing. I’ll see how things turn out once I’ve worked through the process. I hope to arrive at a routine workflow that fits how I personally like to work with images before the end of the course!
I have a couple of Instagram accounts and have been active on neither for months. I’m now beginning to re-engage in a considered way. The few people I talked to on the canal sometimes asked if I had an IG account where they could see the work – I replied ‘no’ as I haven’t been using the accounts.
One account I use for experimenting with iPhone photos and apps (@snappedpixel) and the other is intended for more ‘serious’ photographs (@thephotofitz). For the former, I’ll just continue to upload direct from the iPhone without being too considered about formatting etc – as the name suggests, these are instant photos. For the latter, I need a more considered approach to resizing and formatting for the images to display well on IG. I don’t want IG auto-compressing and cropping images that I’ve spent some time working on.
Here’s what IG says about photo file formatting:
When you share a photo that has a width between 320 and 1080 pixels, we keep that photo at its original resolution as long as the photo’s aspect ratio is between 1.91:1 and 4:5 (a height between 566 and 1350 pixels with a width of 1080 pixels). If the aspect ratio of your photo isn’t supported, it will be cropped to fit a supported ratio. If you share a photo at a lower resolution, we enlarge it to a width of 320 pixels. If you share a photo at a higher resolution, we size it down to a width of 1080 pixels.
Consider what an aspect ratio between 1.91:1 and 4:5 means in practice. The former is the optimum for landscape photos. My photos are 4:3 native, but the maximum IG landscape ratio against 4 would be approximately 4:2. So the options are either to lose 1/3 of the image’s height (a lot!) or to place the image in a 4:2 (3.82:2 to be exact) frame so a border is created. The choice needs to be a creative decision that will also affect the look of the IG grid. The latter doesn’t optimise visible image space but perhaps gives a more considered look – the image, not the frame, takes priority. For portrait images the equivalent situation (for my native 3:4) is an IG maximum of 3:3.75, so much less of a crop and more likely to be manageable through cropping on many images. Once ratios are taken care of, the photo needs to be resized to a maximum of 1080 pixels wide and 1350 pixels tall. Or in ratios: landscape – 1080:565, and portrait – 1080:1350.
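The ratio rules quoted above can be sketched in a few lines of Python (the limits are taken directly from the IG quote; the function name is just for illustration):

```python
# Sketch of Instagram's sizing rules as quoted above: photos are kept at
# original resolution only when the aspect ratio (width/height) sits
# between 4:5 (0.8, tallest) and 1.91:1 (widest); width is capped at 1080 px.

IG_MAX_WIDTH = 1080
IG_WIDEST = 1.91     # landscape limit (width / height)
IG_TALLEST = 4 / 5   # portrait limit

def ig_fit(width, height):
    """Clamp an image's aspect ratio to IG's supported range and
    return the dimensions IG would display at 1080 px wide."""
    ratio = width / height
    clamped = max(min(ratio, IG_WIDEST), IG_TALLEST)
    return IG_MAX_WIDTH, round(IG_MAX_WIDTH / clamped)

# A 2:1 panorama is clamped to the 1.91:1 landscape limit:
print(ig_fit(2000, 1000))   # -> (1080, 565)
# A native 3:4 portrait is clamped to 4:5, hence the crop noted above:
print(ig_fit(3000, 4000))   # -> (1080, 1350)
```

The clamped height either side (565 and 1350) matches the ratio figures worked out above.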
It is clear that IG favours the square or portrait image over the landscape. So, if making images specifically for IG (which I don’t) landscape might be better avoided.
I’ve set myself up a private IG account as a test site – perhaps a good idea to use one anyway to test ‘serious’ photos before putting them in a public space. I experimented using a single portrait image that had already been processed in PS.
Cropped in LR, resized and converted to sRGB on export. Weakest result: the cropping didn’t suit the image content and colours were dull on IG. The LR automatic output sharpening also didn’t seem as effective as manual sharpening in PS.
Image reworked in PS by creating a duplicate of the layers (pre-sharpening), resizing the image, turning the PS background white (as on IG), and processing the image while viewing at a similar size to an iPhone. Sharpening was applied to the resized image. Then finishing / exporting in different ways:
Without cropping – IG cropped and form of image wrecked on upload, though full space on IG grid used.
Canvas size changed to IG 4:5 and margins left around 4:3 image. Full image retained on upload but looks squashed to top / bottom margins of IG frame.
Canvas size changed to IG 4:5 and image transformed to fit with border all around. This was my preferred result by a long margin but also the one that took more work.
This exercise was useful to me. In terms of my workflow:
If an image doesn’t look right cropped to IG ratios, then put it on an IG sized background and leave a border. Unfortunately this is going to be the case for most of my landscape work.
For screen try to recreate the end display environment when working on the image (eg IG is on white and image about the size of an iPhone).
Manual output sharpening in PS after resizing is much more effective than automatic LR export sharpening. This has implications for other outputs too, where I have tended to rely on LR for resizing and sharpening image on export. I’ll revisit the images in my BoW ebook, website, and slide show.
For the past couple of days, I’ve been working on a micro website to accompany the photobook I intend to produce for my BoW. I’d like this to serve a couple of main purposes – something to point people at when discussing my work, and supplement the book with multimedia content. I’d pulled back from including multimedia in the ePub as it was bloating the file size and it was creating a busyness to the book when, in this context, I wanted something more contemplative.
Adobe Portfolio is included with the Creative Cloud subscription so has no additional cost for me. It is used as a WYSIWYG web application rather than a desktop one, and all files are hosted on Adobe’s servers. It comes with an Adobe domain name based on the user account, but it is easy enough to point an external DNS to it so a custom domain name can be used. Importantly, in the past I’ve found it reasonably straightforward and quick to tailor one of its preset templates to the look I was aiming for, including typefaces from Adobe Fonts. It does allow a direct interface with Lightroom Cloud, but I prefer to upload images resized down for web viewing. It is possible to disable right-click on images (ie some guard against unauthorised downloading) but downsized images seem to be the best protection.
I needed a bit of reorientation after some time away from Portfolio but in the end arrived at a format that seems to work. A plus was that Portfolio now supports its own hosting of video and sound – there is no need to embed from Vimeo, so the branding that is present in free accounts is avoided.
The content of the microsite needs further thinking and work and will inevitably be updated as my BoW progresses. However, it is straightforward to update and the main work of layout is now mostly taken care of.
I haven’t yet worked out how best to distribute a book as an ePub, which is the next thing on my research list.
I need to host a couple of videos for my BoW on a streaming service. My personal viewing of online videos is mostly around YouTube – for example if my lawnmower stops working, how do I clean the carburettor, or how do I do something technical in InDesign. I’m very much aware of Vimeo and notice that most creative content is hosted there, but have never really looked into the difference between the two platforms. As I’m about to start hosting, now is a good time.
Some good online comparisons are easily found and I settled on one from the excellent WPbeginner website (https://www.wpbeginner.com/beginners-guide/youtube-vs-vimeo/), since most of my videos will be embedded in WordPress. I decided to start working with Vimeo because:
It does not make money through advertising – YouTube (like the ‘free’ WordPress accounts) significantly disrupts communication with advertisements, over which there is no control. It makes no sense to me to spend hours refining creative output to only have ads obliterate the viewing experience.
Vimeo’s free account is limited in respect of uploads but sufficient for my needs for now. Also the first level premium account, should it be needed, is not expensive at £7 per month.
Video quality is better on Vimeo as its model focuses on quality over quantity and therefore doesn’t have to quickly process the huge volumes of video that YouTube deals with.
Vimeo’s user base is smaller but apparently more engaged creatively. The size of user base isn’t a concern for me.
Today’s task will be to get up and running on Vimeo.
Today I’ve been working on a short video using a selection of my photographs and the piano music that was in the previous edit of my ephotobook but won’t appear in the simplified next edit. I’m planning to use the video on the project website.
The last time I made a video from photos I used Premiere Pro, which seemed a little like using a food processor to beat an egg. Since then, Adobe have released Premiere Rush – a mobile and desktop app aimed at production for online sharing. I took a quick look at this and again thought it over-designed for what would essentially be a slide show to music. Though it does look like a useful tool for lightweight moving-image production.
At some point while browsing Adobe’s site, I noticed that the Lightroom CC slide show (which I’ve never used) now has the capacity to output in mp4. After initial attempts where the application kept crashing when rendering images from the LR library, I exported resized images and then reimported them to try again with the slide show. It worked! By importing images as graphics for the beginning and end slides and asking LR to automatically match the slide show duration to the music, I have a clean and simple mp4 for the project. It will also be easy to swap photos in and out if I change my mind and re-export.
Always nice to find an unexpectedly simple but effective solution. Next step is to upload the video for display on a website for the project.
UPDATE – after working with this approach, I found the process of exporting and then reimporting to LR unsatisfactory; it would end up filling my LR catalogue with duplicate images at different file sizes. Not the clutter I want.
I posted a draft of my photobook cover to the Discuss forum for comment and request for input from a graphic design perspective. No graphic design input at the time of writing this, but some willing suggestions from other students that were much appreciated (link here).
The element I was struggling with was the placement of my name – disconnected from the rest of the title and a little lost in the sky. One suggestion was to use the dark bridge, which helped with visibility but not with my feeling of disconnection.
I made an online study of photobook covers (including Dewilewis’s back catalogue – https://www.dewilewis.com/collections/back-list) and noted the following for photobooks that feature photographs on their covers:
Monochrome is more straightforward as there are more options for placing text that will stand out from the image, including the use of colour text.
Some books have images inset, which allows for a large border for placement of text. Importantly, the original aspect of the photograph can also be retained – this is a factor for my ebook cover, which is 16:9 for screen viewing, versus my native photo aspect ratio of 3:2 (or approx 16:10.6). Perhaps that is why I read these kind of covers as more photographic in form, rather than graphic design driven.
Some books have no text at all – possibly for famous photographers whose work needs no introduction?
The covers featuring full-bleed colour photography that worked well for me were the ones where the text had been designed along with the image, creating a whole image/text. This invariably meant text placement over areas of a selected cover image that would allow the text to stand in contrast to the image. Some designs featured text that was coloured to fit with the image – a quieter effect than heavily contrasted white or red text, for example. However, full-bleed is not attractive in my context, given the difference in aspect ratios between screen and photo.
The movement of the eye across the page is affected by the arrangement of image and text. In western culture we are used to reading from left to right and also generally spend more time looking at text than images (since it very clearly needs to be decoded). Looking at various book covers, I notice the text is either placed centre (ie neutrally balanced) or to the right so the left-centre image has priority in reading. Where text is placed to the left, unless it is lightweight, it tends to dominate the viewing and almost puts a brake on looking at the image.
Using these observations, I tried various new layouts and arrived at the cover below. I’ve now used a border all around to maintain the photo’s aspect ratio and placed the text to the right, vertically as this better fits the available margin space. I’ve retained the original font/colours for the heading but reversed the direction so it flows from top to bottom (taking the eye off the page to the next page). I’ve added my name under the header text so it is connected and differentiated it with a different colour (picked from the image’s sky).
I’m much happier with this but I’m sure others will have their own perspectives!
Having decided to remove sound from my ebook, I wanted to try placing simple locational text alongside the images to see whether it encouraged a pause on the page, or would just be distracting in the context of my work and in ebook format.
I placed text on all pages to take in the effect fully. While it encouraged a pause, it was also a distraction from the image – the form of an ebook is more closed than paper. This contrasts with a small amount of text on the opposite page of a paper book’s spread, which I find unobtrusive and even useful.
I will move the locational text to a separate page after the photos.
InD learning has continued and I’m finally developing some familiarity with areas that are difficult to penetrate. I’ve been working on simplifying the layout of my draft book by dropping the facing-pages layout and adding the possibility of viewing facing images full screen on their own (like a gallery, or a close look into a physical book spread). Also some other cosmetic enhancements.
I went through various iterations of trying to get the gallery view working and when I eventually thought I had, it didn’t work when exported as an ePub or viewed online on an iPad. It seems touch input requires a different design approach to mouse-clicking. After more trial and error, I found that a multi-state object containing the images and a separate button (I used simple text) to move the object through its states worked both with mouse and iPad (hopefully other tablets too). I persevered with this as I agreed with feedback that the smaller facing images were not always easy to view on screen. The gallery seems a good way of allowing certain images to interact, while also allowing the viewer a closer look.
For A3 I simply resized images to target monitor viewing at full-size and didn’t experiment with jpg quality. The ePub file size quickly bloated and there was some evidence of lag when using Adobe’s online publish facility. I realised that this was something that would require further thought and I perhaps have a mental block when it comes to deliberately degrading image quality.
The target output is important – for the purposes of my ebook, I am initially focusing on monitor displays (as this will be the assessment platform). However, if I later produce a book for iPad (arguably the only mobile screen appropriate for viewing ephotobooks), I will also need to reconsider image optimisation.
From my research, there are two areas that require some practical research.
I currently use Lightroom user presets to export images to a given size / quality and think this is probably adequate for online purposes. I use Photoshop when working with printed output. However, apparently Photoshop’s ‘export for web’ allows for the previewing and comparison of up to four different export settings. This is ideal for testing what JPEG quality / file sizes are optimal for the ePub; I think they are currently over-specced. This will also be important when I start to move images onto a microsite.
My images were targeted at just over 2000 px on the long side. For iPads with retina displays, 2048 px on the long side is recommended by Apple (here – https://support.apple.com/en-gb/HT202751), so this seems a good compromise for monitor and iPad that would avoid the need to generate fresh images for an iPad version of my book.
There is plenty to be going at here to optimise the viewing experience. With so many variables at the viewing end (including internet speed), I need to do some testing and make some changes in this area.
I found ‘export for web’ useful for comparing jpg quality at various settings, using a preview of the resized image. Having examined the whole of the image, ‘80 quality’ works well for me – less than half the file size of 100 and little discernible difference (though I think a little subtlety is lost in the sky details).
For image resizing I chose ‘bicubic sharper’, which apparently retains sharpness when image sizes are reduced. However, when exporting from LR there is no user choice of resizing algorithm. I couldn’t locate official information on this but read ‘Adobe Photoshop Lightroom resampling is a hybrid Bicubic algorithm that interpolates between Bicubic and Bicubic Smoother for upsampling and Bicubic and Bicubic Sharper for downsampling.’ (from https://www.digitalphotopro.com/technique/photography-workflow/the-right-resolution/2/, whose author seems to have had technical involvement with Adobe). In any case, a test export from LR at 80 JPEG quality resulted in a similar file size to PS’s and a good quality image.
The final point I considered was pixel dimensions. If 2048 px on the long side is recommended, there is also a maximum short side to fit on a 16:9 screen. A quick calculation determines this to be 1152. Depending on the dimensions of the image, it should either be restricted to 2048 on the long side or 1152 on the short side to be optimised for full-screen viewing on 16:9. I found with my uncropped images this turns out to be 1152, which gives a small additional reduction in file size. I’ve set all this up as a LR export preset for future exports for ePub purposes.
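As a sanity check on the arithmetic above, here is the box-fit as a small Python sketch. It assumes landscape images filling a landscape 16:9 screen (as with my uncropped 3:2 frames); 1152 is simply 2048 × 9/16, and the function name is my own:

```python
# Fit an image inside a 2048 x 1152 (16:9) box, preserving aspect ratio.
# 1152 = 2048 * 9 / 16, the matching short side for a 16:9 display.

def epub_export_size(width, height):
    """Scale (width, height) to fit within 2048 x 1152."""
    scale = min(2048 / width, 1152 / height)
    return round(width * scale), round(height * scale)

# An uncropped 3:2 landscape frame is limited by the 1152 px short side:
print(epub_export_size(6000, 4000))   # -> (1728, 1152)
# A wider-than-16:9 panorama would instead hit the 2048 px long side:
print(epub_export_size(6000, 2000))   # -> (2048, 683)
```

The 3:2 case confirms the 1152 result above, with a long side of 1728 comfortably under the 2048 limit.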
Different file types / compression – https://matthews.sites.wfu.edu/misc/graphics/formats/formats.html
Optimising images for ebooks – https://blog.kotobee.com/optimize-images-ebook/
Having made an ePub with basic interactivity for A3, I’m researching other functionality that could be useful. A few people commenting on the draft mentioned they would prefer to see full size images, rather than images sharing a page. For some images, I want them displayed together because their interaction creates an additional meaning. However, I understand the frustration of not being able to easily look closer at an image – something we can do instinctively with physical materials.
A solution to this in an interactive ePub is the interactive button – the images are converted to buttons and actions programmed that are triggered when a user clicks on an image. Using different sized images on a page, converting them to buttons, and using the ‘hide until triggered’ option gives the possibility of a user clicking to view each of the adjacent images full screen. So the best of both types of view.
This is something I’ll incorporate in the next draft.
I’ve been here before – spend a while away from printing for one reason or another and end up with clogged print heads. Last time it happened, I promised myself I’d at least print something a couple of times a week, but I’ve neglected it again.
After a few head cleans and print purges of the cyan, I’m still not getting a good print head test for that colour. I’ve spent the morning watching and reading about printer maintenance for an Epson SC P600 and note a few things I need to do regularly. It really should be part of professional practice, keeping tools well maintained and ready to go.
Pigment-based inkjets dry out if they are not used regularly and ultimately clog. The ink sets on the print heads and even print head cleaning won’t shift it. Like my cyan this time. I’ve ordered a cleaning kit and will use the approach suggested by Marrutt (and others here – https://www.marrutt.com/find-my-printer/epson-surecolor/epson-surecolor-sc-p600-printer/epson-surecolor-sc-p600-printer-support#unblock). General advice seems to be to print at least twice a week, or if you’re unlikely to do this, don’t buy a printer in the first place.
Other parts of the printer also need maintenance and I’ve never done this – cleaning the paper feed mechanism, cleaning the printer’s head wiper blade, and cleaning the spill pads (used when printing full page images).
Marrutt provide rather dry videos on the subject but they are to the point. For a more conversational approach Jose Rodriguez’s YouTube channel is full of useful information, providing one has time to listen to the chitchat – https://www.youtube.com/channel/UCz9YXaSulpM90vC24lmAeZA.
I’ll check in with myself monthly to see how I’m managing my printer discipline; reminder added to phone!
Update – eventually managed to clear the print head blockage using cleaning fluid and a J-cloth cut down to put on the platen under the print heads (first a thin strip soaked and left overnight, then a folded strip used to physically wipe by moving the print head over it). Now I need more ink and was shocked to find that Epson OEM inks are now over £200 for a set. I’m going to revert to trying Marrutt refillables, which are on offer at £155 for almost three times the quantity of ink. I tried them unsuccessfully in the past, but it could have just been my inexperience as they do seem to be very well reviewed.
Frankly, this has been a bit of a nightmare! I previously wrote about my plan for exporting the ebook so it was readable on several platforms. It didn’t really work.
One problem is that multimedia PDF is not being supported beyond the end of 2020 (per the InDesign warning on export) and doesn’t reliably reproduce content. This seems to be connected with Flash Player-based content – effectively a deprecated technology. InD includes other legacy functionality, but unless you are aware of this, it is easy to come unstuck. For example, inserting audio using the simple media player and built-in icons is also unstable – particularly on Windows. It took hours of messing around to find the problem and then to work out how to use button objects in InD.
However, the 16:9 format seems to work well enough in the web-browser of any device (even the tiny screen of the iPhone) – so publishing and hosting on Adobe’s servers for online viewing works. It can be laggy on slow internet connections but there’s little I can do about this apart from exporting at high (rather than highest) resolution settings – I don’t want to put low res files up.
To overcome the PDF issue (for a downloadable book), I’ve reverted to fixed layout ePub. I’ve successfully exported in 16:9 format, which is good for using ebook readers on a monitor. The limited interactivity I’ve used so far works (for me at least). This would be a backstop for any issues with internet connections when viewing online – ultimately, I’m thinking about assessment. Unfortunately, my efforts at using InD alternate layouts and targeting the 4:3 iPad format came to a grinding halt – my media buttons simply would not stay on screen, despite anchoring, grouping and swearing loudly. I’ll have another go now I’ve moved to button objects – perhaps in the next iteration of the project.
The ePub format seems necessary for offline viewing, with the demise of interactive PDFs. I’m reluctant to try other tools like PowerPoint as its layout and formatting possibilities are relatively limited and it’s not conceived as an online publishing tool – I hope to eventually become a competent InD user as I do have an interest in making books and photobooks. InD’s alternate layouts offer the possibility of efficiently turning an ebook into a paper book (and vice versa). However, the nice page-turn effects are application/device dependent – so the iPad and Apple Books is effective. I’ve not seen similar effects on laptops – the ebook ends up looking disappointingly like an interactive slide show.
In conclusion, I’m sticking rigidly with 16:9 format for online publishing through Adobe and ePub for offline backup. For the next iteration I’ll revisit the iPad. It’s been a huge learning curve, including remembering things I once did without thinking – there’s little hope of just reviewing the book on a large photography monitor and expecting to get the font size right! Thankfully, I’m left with some appetite to explore further interactivity, but also mindful that I want the book to be a quiet experience. Perhaps other stuff might end up on the microsite, which is also in my next phase.
When I put my draft ephotobook out for student feedback, a technical issue with displaying the online published version was identified. I’d tested it on my MBP and I’d tested the ePub download on an iPad, but the online version wasn’t displaying correctly on the iPad (not opening to fill the full screen, and not good). I’d used ID’s own iPad Pro sizing for the book.
I’ve done some research on fixed-layout ebook sizing and summarise it here.
Different mobile devices have different screen dimensions and my own ebook will hopefully be viewed on a large computer screen more than a mobile device. Rather than sizing specifically using an iPad preset, it is better to size for 16:9 as a ratio that works better across different devices (including conventional laptop screens). For side by side pages, this becomes 8:9.
‘You could be forgiven for assuming that setting your page to be the same size as an iPad screen or Kindle Fire screen is all you need to do. However in order to create files that retain quality and definition when the reader zooms in, both Amazon and Apple recommend that you produce pages larger than the actual screen size.’ (ebookpartnership). Glad I’m forgiven and surprised that ID’s presets didn’t allow for this. Apparently Amazon recommend double the pixel size of the device (to allow 2x zoom) and Apple 1.5x. Painfully, this also has implications for the sizing of my image files that were resized to an iPad’s pixel dimension. I need to find a compromise for laptop screens too (where I wouldn’t see a need for zooming in). The same guide recommends a long side of at least 3840 pixels. My ID preset has 2224!
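The zoom-headroom sums quoted above work out like this (a rough sketch; 1920 × 1080 as a representative 16:9 screen is my own assumption, not from the guide):

```python
# Page pixel size = target screen size * zoom headroom
# (reported above as roughly 2x for Amazon, 1.5x for Apple).

def page_size_for_zoom(screen_w, screen_h, zoom):
    return round(screen_w * zoom), round(screen_h * zoom)

# A 16:9 target of 1920 x 1080 with Amazon's 2x headroom:
print(page_size_for_zoom(1920, 1080, 2.0))   # -> (3840, 2160)
# The same screen with Apple's 1.5x:
print(page_size_for_zoom(1920, 1080, 1.5))   # -> (2880, 1620)
```

The 2× case lands exactly on the 3840 px long side the guide recommends, which is some way above my preset’s 2224.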
Page numbering – I noticed that the device reader’s own page numbering is different from what I’ve put on the pages. I numbered as I wanted the numbers displayed in a paper book; it is better to number pages as they will display in the ebook reader (ie the cover is page 1). Another example of an ebook being a different animal.
I’ve also thought more about the format of the ebook. My original idea was to make it available as an ePub, but there may be disadvantages of using a fixed layout ePub against a PDF, given I’m not planning to sell the book online. The main one being that the ePub is device/application dependent for the viewing experience, whereas the PDF is not. This would seem to offer more control over the viewing experience. The other consideration is the use of ID’s online publishing – essentially this is a web-based viewing experience and if many people are going to simply view online, formatting for that space perhaps needs to take precedence. It does give the option of allowing the viewer to download a PDF directly from the online view, which could be a neat solution for my purposes.
Having researched and thought this through, I’m going to try the approach of sizing for online display at 16:9 with a long-side pixel dimension of 2560, as somewhere between HD and 4K. I’ll aim to create the illusion of a spread, so will split this into two 8:9 facing pages. I’ll then test how well ID exports from its web viewer to PDF (including multimedia) – viewers can then download if they wish to, and I can include the PDF in the submission for the work (as an alternative to online viewing, which can be affected by poor internet connections).
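For the record, the spread geometry works out as follows (plain arithmetic, no assumptions beyond the 2560 px width above):

```python
# A 16:9 spread at 2560 px wide, split into two 8:9 facing pages.
spread_w = 2560
spread_h = round(spread_w * 9 / 16)   # 1440
page_w = spread_w // 2                # 1280; 1280:1440 reduces to 8:9
print((spread_w, spread_h), (page_w, spread_h))
```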
The outcome / any adjustments will be included in my submission for A3.
After many hours in front of Indesign and Photoshop, I’ve finally made progress with the first edit of my ephotobook and will share for comment before the end of the week. Here I reflect on the experience in the hope of less suffering next time around.
It’s probably over a year since I used Indesign having gone through a period of making zines and ezines. As for most things I haven’t used for a while, I feel a rustiness and a lack of fluidity. The technical stuff is important – if I don’t feel on top of it, it can frustrate the creative work. I’ve said it to myself before but I must use these tools regularly to avoid the pain of refreshing and relearning.
Just as a photograph is a new and separate reality from its referent, so an ephotobook is to a photobook. I was able to create more readily once I’d really understood this. For example:
While it has ‘spreads’, they are very different to pages in a paper book. With paper, we are sure of its form and its mechanics of use; pages are turned, spreads reveal, the gutters obscure. With an ebook, different reading applications give different viewing experiences (eg page turns or spread viewing). If your spread doesn’t open as a spread, your images won’t make sense. Eventually, I thought of an ebook spread as a single page, but worked to create the illusion of a spread as a useful and familiar layout form.
Screen space (particularly on mobile devices, like iPads) is limited compared to a book. There is a temptation to make all images fill the screen so they can be clearly seen. Then the book just becomes a slide-show, which lacks the visual nuances and rhythms that layouts provide. Once I’d let go of the significance of the gutter (it is just imaginary in an ebook), layout possibilities seemed to come more readily.
Attachment to photographs can hinder a good ebook layout – I’d spent time working the images within their frames and was attached to showing the whole of each image. After putting this aside, there were more options for how to place photos together and work the layout.
Attention to detail is vitally important, and even that sometimes doesn’t save you from frustrating rework. With the book, the photos are the ingredients, and if they are not quite right it becomes clear when eating the cake. Looking at photos closely as a book develops, and in context next to other photos, reveals flaws. Despite looking closely at images before exporting them for ID use, I found some problems later. It does suggest I need to be more rigorous when finishing photos in post. I won’t even get into the pain of cover design and text!
The multimedia possibilities of ebooks add a dimension that possibly compensates for their lack of physicality, and they invite user interaction. So far I’ve added soundscapes that the viewer can activate, a piece of piano music for the canal (kindly composed and played by my son), and a map of the canal’s route. I also plan to add a video interview with myself (once the work has progressed further) and a link to a microsite for the project.
Next, I need to revisit what the work means to me after the additional shoots and spending time with the draft book. This will inform the brief foreword to the book.
You have to live with things a while before deciding whether they are a good fit. I decided to change the project name to Slow Water Tales when I re-engaged with my BoW. There was always a slight niggle that this would sound like a children’s book – they often seem to be called ‘tales’, and of course there is Tales from along the River Bank! When out shooting last week, I came across some graffiti on the boards around a building site – ‘Air Land Water’. I think it’s a broad description of what the canal is about without being directional, though it would perhaps need a subtitle. I have an image with the graffiti to include in the BoW and, as a plus, the domain name airlandwater.co.uk was available – I bought it in an LCN sale for £1.20.
The text in the header was extracted from the photo in Photoshop. I’m going to live with this name for a while and see where it takes me.
For a while I’ve been thinking about making a book of my BoW. However, a significant part of the experience of viewing a book is tactile and with the OCA’s move to digital only assessment this would be missing. I can only assume that this approach will continue for the foreseeable future. If I made a paper book I would be left with videoing a page turn through it and probably also submitting a digital version in any case. At this time making a paper book for the purposes of OCA BoW assessment doesn’t feel like a worthwhile endeavour. While I will make one at some point, for now I’ll focus on making an ephotobook.
Online research offered nothing specifically about ephotobook design, though there is good information about photobook design and there are paper photobooks to view. The importance of space around the photos, sequencing and pace is emphasised. Ideas such as gestalt – using double-page spreads to display two photos together – are powerful. Then there is the tactile experience of holding the book, and the choice of materials that helps to create it.
However, some of these ideas don’t translate to ephotobooks as I expected. I experimented with making an ebook in InDesign. A key finding is that the ebook experience is device dependent – particularly problematic when working with spreads. A spread might work well when previewed in InDesign or uploaded and viewed through Adobe’s online platform. However, it quickly falls apart on mobile devices (I tested on an iPad), where the spread might not display as a double page (this is reader-app dependent) and, if it does, the images are simply too small to read and it becomes an annoyance – particularly if a single image is placed across two pages! I learned that spreads must also be designed to work as single pages if the ebook is to be portable between platforms. Back to the drawing board for v2.
The ePub format has different qualities and, through its interactivity, some advantages over paper. It seems important to explore this and play with the idea of an ebook being a different experience, rather than a compromised version of a paper book. For example, a link to an online map or the inclusion of sound files could be tried. The sounds of the canal could make an interesting accompaniment to the images, as it is the canal’s relative quiet and separation from its immediate environment that is important to the place’s ambience. There is even the possibility of including video.
From my first experiment, it has quickly become apparent that I need to think of an ebook as different to a paper book, not a simple replacement. Otherwise, the ebook just becomes a lesser paper book.
I’ve been guilty of mostly editing on screen in the past – placing my images in a Lightroom collection, shuffling their order and flagging / unflagging as I went along. It was convenient when I was travelling and also saved ink and paper!
I now have a good number of photo resources for the project, so I’ve broken free of the screen and printed draft images to shuffle and get an in-one-take view of how things are looking. I’ve known that this is the best way to do it, but it’s the first time I’ve printed lots of draft images. I set up the LR print module (single image/contact sheet format) to print 4 images on an A4 sheet in ‘draft mode’ (whatever that does), and it runs through all selected images, rendering and printing them in one go.
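As a side note, the 4-up sheet geometry can be sketched in code. This is a minimal illustration of a 2 × 2 grid on A4, assuming ~150 dpi and a 40 px margin (my values for illustration, not anything Lightroom exposes):

```python
# Sketch of a 4-up (2 cols x 2 rows) draft layout on an A4 sheet.
# A4 at ~150 dpi and the 40 px margin are assumptions for illustration.

A4_W, A4_H = 1240, 1754   # portrait A4 in pixels at ~150 dpi
MARGIN = 40

def grid_cells(cols=2, rows=2):
    """Return (x, y, w, h) for each image cell, row by row."""
    cell_w = (A4_W - (cols + 1) * MARGIN) // cols
    cell_h = (A4_H - (rows + 1) * MARGIN) // rows
    return [
        (MARGIN + c * (cell_w + MARGIN),
         MARGIN + r * (cell_h + MARGIN),
         cell_w, cell_h)
        for r in range(rows) for c in range(cols)
    ]

cells = grid_cells()
```

Any imaging library could then paste thumbnails into those cells; Lightroom’s contact-sheet layout does essentially this for you.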
Even with draft prints, it is much easier to see which images don’t quite fit or need reordering. I’m a convert. I’ll also use the same prints to play with book layouts.
Some of my images contain the level of detail that is typical of landscape photographs. It’s not an area I have focused on in the past, and I’ve been struggling a little with selective sharpening in Photoshop – my default tool is the unsharp mask, but I’m not happy with the results for some of the landscape details; I see hints of haloing and then, when it’s backed off, the details are not as sharp as I’d like.
This morning I looked at the newer ‘smart sharpen’ filter and am happy with the results.
Screen grabs (unfortunately not the same size) show the difference. I’ve learned that smart sharpen is better at detecting edges (it can model lens blur) than the unsharp mask (which is based on Gaussian blur). The control over fading the sharpening in shadow and highlight areas helps in automatically refining the areas for sharpening.
I’ve found that using smart sharpen on a smart object layer, with a mask to hide sharpening in some areas of the image (eg the background), gives the selective clarity I was looking for.
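The layer-mask mechanics can be illustrated with a toy blend. This is a sketch of what a mask does conceptually (blend the sharpened version back in only where the mask allows), not Photoshop’s actual algorithm:

```python
# Toy sketch of masked selective sharpening on greyscale pixel values
# (0-255). In practice "sharpened" would come from Smart Sharpen or
# any other sharpening filter; here the values are made up.

def blend_selective(original, sharpened, mask):
    """mask: 0 = keep original, 255 = fully sharpened (like a layer mask)."""
    return [
        round(o + (s - o) * m / 255)
        for o, s, m in zip(original, sharpened, mask)
    ]

# A fully hidden, half-revealed and fully revealed pixel:
row = blend_selective([100, 100, 100], [140, 140, 140], [0, 128, 255])
```

A grey (mid-value) area of the mask gives a partial effect, which is why painting the mask with soft brushes fades the sharpening gradually.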
I’ve been thinking conceptually about post-production, partly because of my look at Nadav Kander’s work and partly because I’m aware that some of my own work takes the prints some way from what I saw. Helpful to my reflection were comments from fellow students by email and on the Discuss Forum (the discussion is here – https://discuss.oca-student.com/t/postproduction-and-the-dark-line/12369). A particularly important concept mentioned was deception. We don’t like to be deceived unless we give permission, for example in the context of arts like cinema. But the multiple and varied uses of photography can confuse the context for the viewer. As I approach the end of my studies, I feel it is important to succinctly articulate my own position on aspects of photography that can be contentious. Of course, these views may well continue to evolve. So, on postproduction:
Photography is a tool that can be used in a wide range of contexts. In some, such as news photography, being true to what is seen is important. The use of tools like Photoshop to alter images can be contentious but, like photography itself, needs to be taken in the context of its use. In some of my work, I use Photoshop to take images beyond what I saw, towards what I imagined and felt. I rarely add new elements, but emphasise or disguise certain parts of images. I might sometimes remove distracting elements.
My BoW tutor suggested I look at Nadav Kander’s The Dark Line as a contrasting interpretation of the same estuary as in Frank Watson’s work.
Kander’s project is shown on his website (https://www.nadavkander.com/works-in-series/dark-line-the-thames-estuary/single) and Photoworks has an interesting interview with him about the work (https://photoworks.org.uk/interview-nadav-kander/). Whereas Watson’s interpretation of the estuary is quite literal, Kander’s is more about the creation of atmosphere and the artist’s reworking of the referent to express something of himself.
It is this different approach to the process of photography that interests me in the comparison between the works. In the interview Kander comments:
NK: It’s in my studio that the most decisive moment of this process takes place. You can’t make a great print without a good photograph but I must say that for me it’s not in the picture taking. There’s a lot of layering of colour and weight, and the editing and printing process is what takes these prints a further distance than the photograph itself.
This is particularly evident in some of Kander’s works that take on a painterly quality. While he may not have artificially added new elements to the photos, they are worked to the extent that they possibly bear only a passing resemblance to the unprocessed image. While the images in Kander’s work on the Yangtze appear worked to a lesser extent, it is clear from interviews that he views his work as an expression of how he sees and feels, including in his portrait work.
There is an artistic decision in the extent to which postprocessing possibilities are used, and this has become more accessible with digital images and tools. I explored the possibility of working images when I was restricted to iPhone photography for a long period. I enjoyed this work and expressing how I felt about subjects. When I work now, I use Photoshop heavily after a period of abandoning it for Lightroom and very straight work.
An example of an unprocessed file and the processed output are below to illustrate.
The colours and areas of focus have been worked to make the image visually compelling. Some might say that the resulting image doesn’t look like it did in reality. However, that is not relevant as my work is not intended to document reality but add my own interpretation and create visual interest.
What I’m still working on understanding is where the line falls in image enhancement. I suspect that there is no fixed line; rather, it is a combination of personal voice and appropriateness to the subject matter. Kander’s more abstract estuary work seemed to allow him more licence than the Yangtze, with its obvious human and man-made elements. It is clear that a similar treatment needs to be applied across a series if it is to maintain coherence. This has significant implications for workflow and decision making if extensive rework of individual images is to be avoided.
I struck up a conversation with Sandra and Alan by commenting on a newly started greenfield housing development alongside the canal at Gargrave. They’ve lived on the canals for 34 years and are still in lockdown – The Canal & River Trust (CRT) have ordered that the locks should not be used during Covid lockdown and have also advised to minimise use of the canal towpaths by walkers. There are large numbers of moored boats where one might normally see very few.
I found out useful things through talking with them. They were careful to say that their experiences cover many years and things may have changed since they last visited places.
The Leeds & Liverpool Canal is generally quiet (not like some others they visit). I asked about the Liverpool end, as I’d not yet visited it. Apparently, until 5 years ago a police escort was required to take a boat into Liverpool, as vandals would throw bricks at the boats. This surprised me and I made a mental note to be cautious when I venture to that end.
It is difficult for boats to get on and off the canal – either via the Aire and Calder Navigation, which is tidal and difficult for narrowboats that were not designed to be steered in fast currents, or at Wigan, which I was told has a flight of locks that are painfully slow to pass through. I silently wondered if this is one reason the canal isn’t busy.
What about Rochdale Canal I asked (since it is within driving distance)? They don’t go there after an experience many years ago when they were threatened by drug dealers and advised by the police that it is best treated as a no-go zone. Another mental note to be careful if I venture onto that one!
I asked what they like about the canal – was it just the travelling? They have met some wonderful people over the years and enjoy the freedom of wandering. They don’t necessarily move every day, but when the feeling takes them. It is difficult to move freely in the winter months as many locks are shut for repair, so they tend to moor near Gargrave (where they have family) and find work for the winter.
Sandra and Alan’s appearance was similar to many along the canal, sporting ‘outdoor clothing’. However, their willingness to talk did make me think that even if I don’t manage to find people that are visually interesting, it might be that I can bring people into my work through their words.
I’ve done more thinking about Paul Graham’s visual signalling through overexposures (https://oca3.fitzgibbonphotography.com/american-nights-unseen-landscapes/). For landscape, the banal is obscured by the picturesque in popular culture. For the canal, the water and its reflections connect to the pastoral as it flows through the open countryside. I could use water to obscure my images of the banal to mark them in stark contrast to the deliberately picturesque images, rather than A2’s picture in picture approach. It would be a visual play on them being unseen and obscured by the picturesque.
After some experimentation, my first attempt is below.
A number of similarly treated images would be interspersed with the picturesque. For example …
Then, borrowing from Paul Graham, the series would conclude with a few fully visible banal photos, hopefully drawing viewers in to give the images their full attention.
After the Zine workshop yesterday (https://oca3.fitzgibbonphotography.com/red-eye-online-zine-workshop/), I made a Photoshop template by dividing an A4 blank document using guides, drawing squares for the pages and converting them to smart objects so images could be added to size later. I worked out the orientation for the page numbers by making a blank x-book, numbering it, and then opening it out to see the numbering on a flat page; I used this to add the numbering/orientation in PS.
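The guide positions for a template like this are easy to compute. A minimal sketch, assuming the sheet is divided into a 2 × 4 grid of equal panels and landscape A4 at 300 dpi (both my assumptions – adjust COLS/ROWS and the resolution for your own fold):

```python
# Guide positions for dividing a landscape A4 sheet into equal panels.
# 300 dpi and the 2 x 4 grid are assumptions; adjust for your fold.

A4_W, A4_H = 3508, 2480   # landscape A4 in pixels at 300 dpi
COLS, ROWS = 4, 2

v_guides = [A4_W * c // COLS for c in range(1, COLS)]  # vertical guides
h_guides = [A4_H * r // ROWS for r in range(1, ROWS)]  # horizontal guide
panel_w, panel_h = A4_W // COLS, A4_H // ROWS          # one page panel

print(v_guides, h_guides, (panel_w, panel_h))
```

The panel size then becomes the dimensions of the smart-object squares that the images are dropped into.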
I made a quick x-book with some images from A2. It’s the featured image for this post. I wasn’t aiming for a high quality output, but something that I can keep to hand and reflect on the development of the work. It is already helping!
This week, I enjoyed an online Zine workshop hosted by the RedEye network and delivered by Shy Bairns (an artist collective). Good to see a couple of other OCAers there too! It was a 30-minute demo of making a folded book, similar to the one in the embed below.
I’ve made these in the past as well as bound books, so I learned nothing new. But it did remind me how much I enjoyed making these things and that one of my intentions when beginning BoW was to produce tangible items, rather than just digital images or straight prints.
I’ve dug out my reference books (Making Handmade Books by Alisa Golden, and Self Publish, Be Happy by Bruno Ceschel) to read through on this wintery summer’s day and will be making something while the weather keeps me indoors!
The materiality of water was discussed in my dissertation – as a connecting flow between people and places and in terms of an existential relationship with humankind. I had the idea of using a picture-in-picture approach to connecting water visually to what I photograph on the canal. Water, becoming a connecting theme visually, as well as a connecting sociogeographic aspect.
An experiment involving Irn-Bru is below.
Perhaps this is an interesting way to showcase the important spoils of activity connected along a watery flow. It also allows for a less straight approach to the work, which I find more difficult when dealing with a ‘natural’ landscape.
Could be a way forward for A2. Will need to work up a PS template file to ensure a consistent layout.
‘We suffer from the tyranny of linear perspective’, and it is as problematic unpicking meaning from what we see in the world as it is from representations in images. I’ve explored in words the idea that space is shaped by culture, and is contested and normalised by dominant narratives. How might this be represented in images?
I’ve experimented with the idea of unpicking meaning through composite images. This is as much about testing process as outcome at this stage.
I’ll put this out for comment before taking it further – could be an approach to A2 BoW as a play on the thinking in my CS A5. These test images are still more or less straight images – I’ll experiment further with more extreme collages. There is perhaps a connection to Dada in the thinking – a protest against the normalisation of space?
I’ve been on a long break from photography, while enduring a long commute to a client for a three month project that turned into six months. This left me with little available time, other than that found during train journeys. There was time for reading when the quality of the Northern Rail carriages allowed and watching downloaded films or television shows. There was also iPhone photography (#iphotography), some of which I posted to my @snappedpixel Instagram account.
I’d already been reflecting on straight photography and how much of it lacks visual reward – a shot of something banal that is justified by an unseen concept (and an unknown one, unless context is provided). Andrew Conroy’s presentation to an OCA North meeting made me question further – his work contained a mix of straight and ‘manipulated reality’ (Fixing Shadows); when asked which he enjoyed more, it was the manipulated works. Talking to painters after Andrew’s presentation, they confessed to being generally nonplussed by some straight photography, observing that without knowing the concept, they simply didn’t find it interesting.
The academic blog Fixing Shadows contains a discussion on straight photography and what its opposite might be – how to define it without the pejorative reference to pictorialism. The general view was that it is far easier to define ‘straight photography’ than its other. One respondent noted he found it a shame/amusing that the photographic world had become obsessed by such questions, and would benefit from spending less time debating and more time making images.
I decided to experiment with ‘manipulated photography’ using my iPhone and the icolorama app which allows a high degree of control over manipulation (in contrast to the automatic filter based apps).
I found the process of augmenting the captured ‘straight’ shot to convey my internal perception rewarding. In the Leeds train station shot, the red and blue tones are exaggerated, as is the blurriness of the steamed window. I experimented with other manipulations that are shown on my Instagram feed. My conclusion is that, as long as one begins with a pre-visualised manipulation, it can be used to express artistic intent through photography – though there is also room for the unexpected ‘accident’ in the process.
I’m about to return to camera based work for the course, and will not be as attached to straight representation after the ‘iphotography’ experience.
After some research and a discussion thread with other students, I decided on the Zotero application for my research folder (required for this course and to be submitted as part of assessment). Things I liked:
Free and widely used across the academy
Integration with WordPress.org sites (through the Zotpress plugin) – note that it is not available for WordPress.com. This means any inline citations can be generated from the research folder, ensuring it stays complete. It also automatically generates a bibliography in a post based on the citations.
Large attachment files can be linked rather than using up the limited free Zotero cloud space (I’m using the OCA Google Drive for any large files). This should avoid upgrading Zotero cloud storage, though it is not expensive if needed in the end.
Extensions for web browsers that allow links to online resources to be sent automatically to the research folder.
Plugin for MS Word for essay citations and bibliography
Built in formatting for UCA Harvard referencing
Ability to set the research library up as a ‘group’ so that tutors and assessors can be provided with a link to access it online, including any attached documents.
A rough diagram of my configuration that I used to help my understanding during set-up:
Update note (diagram): after some use, I decided to use Zotero’s own cloud storage for scanned documents. Attaching reduced-size PDFs should make the free storage sufficient for my purposes and, if it proves necessary, the first-level upgrade is just £20/year. I have more faith in maintaining the integrity of my online research folder this way than in spreading attachments across a Google Drive.
Here is an example of an in-text citation using the Zotpress widget. A shortcode link is generated through the Zotero Reference widget (installed in the sidebar) from a search of the Zotero research folder.
The bibliography is automatically generated from the citations in the post, using a shortcode.
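For concreteness, the shortcodes look roughly like this in the WordPress post editor. The item key is a made-up placeholder, and shortcode syntax can change between Zotpress versions, so check the plugin’s current documentation before reusing:

```text
In-text citation (item key found via a search of the Zotero library):
[zotpressInText item="{ABC12345}"]

Bibliography built from the citations used in the post:
[zotpressInTextBib style="harvard1" sortby="author"]
```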