Post-tutorial reflection

Project 8 is behind me and there are only two more before the end of this penultimate unit, so this is as good a time as any to take a look back at how things have gone.

Key points

  1. Second Life – two exhibition venues, two talks to audiences, development of 3D building skills from manipulation of a basic cube to design and construction of two galleries.
  2. Artsteps – four galleries, three self-built, one exhibition of peer group work.
  3. Application of creative writing to gallery labels.
  4. Development of skills in several technologies supporting the display of work, and in extending work with digital adjustments and augmented reality.
  5. Significant development of painting technique, confidence, and sense of capability in imaginative extension of photographic references and ‘editing’ work in progress.

Progress points

Many of these relate to the limitations of the technologies I’m using, or to gaps in my understanding of what else they can do.

Simple apps such as Lightricks MotionLeap and PhotoLeap have a limited range of options, as does PhotoMirage for desktop, so while every user and every individual use will apply those options differently, it eventually becomes apparent that the outcomes are very similar.

The addition of greenscreen windows through which to view the results, however, brings another dimension, and putting the final video into Artivive allows 3D objects and animated layers to be included in the viewer experience.

At present, my access to 3D models is limited: I don’t have the skills to make them, and the best are understandably expensive. They would not, in any case, work without a form of activation that places them on a target image in the real world, and Artivive is, to my mind, the best available despite the limits to file sizes in its assembly bridge. It is also much more viewer-friendly in that it goes straight to the AR without requiring any further clicks through a website or QR code. So while I can put 3D animations in my videos, these are not realised as layers outwith the video layer; that can only happen via Artivive.

My actions with regard to moving this on have been to access 3D modelling YouTube videos, Procreate video production (which was itself a revelation; my favourite is Worm Dad), and tutorials put out by the makers of the apps I use. This includes the CyberLink suite, which has desktop applications for making audio tracks (AudioDirector), manipulating images (PhotoDirector, which also has animation capabilities), editing video (PowerDirector), and grading colour (ColorDirector). I have used all of these except the last, so it’s on my list to explore.

Some apps are for mobile devices only, which limits their ease of use due to screen size and the sensitivity of drawing implements. Others are for desktop use, which gives me plenty of visual room to work. So far I haven’t found an animation app whose output can be imported into another animation or photo-colouring app and retain its own applied effects.

It’s hard to see the 3D issue being resolved without an intermediate app such as Artivive, as the film industry has struggled with this without much popular (and by inference technological) success. Greenscreen, though, allows me to project images through other images, and Artivive allows me to put these on buildings where, if there is a mobile signal, the Artivive app will activate the AR layers.

I have said elsewhere that using Artivive in galleries generally gives rise to AR layers that map onto their targets quite closely, while AR activation in the wild, as it were, seems subject to different pressures. In these circumstances the AR can appear at an angle to the target, in front of it, or flickering in and out of capture (which gives rise to many, MANY counts of hits on the dashboard!). This is despite careful positioning on the Artivive deck. I have asked the engineers why this happens, what forces are influential, and how to minimise them, but to date there has been no response. However, while this app was made initially to serve artworks in galleries, increasing numbers of artists are taking it onto the streets, so I would hope stability might soon become a more prominent engineering issue.

As a point of interest, I had wondered whether the AR would work at all in Second Life. The idea, when I thought about it, of expecting Augmented Reality to map onto a Virtual Reality object was mind-boggling, and I can only guess at the physics going on there. But it did, and it behaved in the same way as the outdoor activations – often askew, sometimes out front, sometimes almost sideways on, and often flickering. So it seems that gallery or other contained conditions (anything in my house works well) represent stability for the app, while anything else has adverse effects. Whatever that difference is (and there must be something identifiable and measurable), this is where the instability comes from; and if it’s identifiable and measurable, presumably there is some way of developing a compensatory tweak. Can you tell I’m not a programmer or a software engineer?

SCH 2024
