Computational Science Community Wiki

The Manchester UK ACM SIGGRAPH Professional Chapter

As of November 2014 the chapter is de-chartered, but it may be revitalized by completing the viability plan located at:

A Professional Chapter is founded in accordance with the ACM bylaws. After eight years this chapter is now in probation, with its roles passing on to future chapters. We thank all sponsors and contributors over this period: Martin Turner.

Support received with thanks from the Advanced Interfaces Group and Research Computing Services at the University of Manchester. Presentations were broadcast globally as well as recorded via the Access Grid; see:

ACM Europe Chapter Workshop

1st Meeting 12-13 January 2012 /AcmEcw

SIGGRAPH Conference Reports

Newsletter Article: SIGGRAPH 2011 Vancouver /NewsLetter11

Newsletter Article: SIGGRAPH 2010 LA /NewsLetter10

Newsletter Article: SIGGRAPH 2009 New Orleans /NewsLetter09

Newsletter Article: SIGGRAPH 2008 LA /NewsLetter08

Keynote Presentations / Events

Details of electronic virtual venue and Access Grid joining instructions are available at:

SIGGRAPH CAF Animation Session, Wednesday 12th December 2012 (12/12/12), 2pm-3pm Roscoe Theatre B, Roscoe Building, The University of Manchester

tn_anim12-p1.jpg tn_anim12-p2.jpg

We will be presenting some of the best SIGGRAPH animation video clips from the 2012 DVD collection. The animation ranges from professional houses to amateur enthusiasts. This meeting will not be transmitted across the Access Grid, so you will have to travel to Manchester. img121212cscs.jpg

3D Visualization Event, Tuesday 6th November 2012, 10am-12noon Lecture Theatre 1.4, Kilburn Building The University of Manchester


The Visualization Services Group will introduce VSG visualization and image analysis software solutions. Avizo and Amira are powerful, multifaceted tools used for visualizing, exploring and analyzing scientific and engineering data. They help engineers, scientists and researchers gain greater and faster insight into 2D/3D images and numerical simulation data. This is not an official ACM SIGGRAPH Chapter event, but it will interest members.

Richard Stallman, free software movement. "A Free Digital Society (alternate title: What Makes Digital Inclusion Good or Bad?)" Tuesday 26th June, 5-6:30pm Theatre A, Roscoe Building, The University of Manchester


Activities directed at including more people in the use of digital technology are predicated on the assumption that such inclusion is invariably a good thing. It appears so, when judged solely by immediate practical convenience. However, if we also judge in terms of human rights, whether digital inclusion is good or bad depends on what kind of digital world we are to be included in. If we wish to work towards digital inclusion as a goal, it behooves us to make sure it is the good kind. img3847rawcss.jpg

Dr. Richard Stallman launched the free software movement in 1983 and started the development of the GNU operating system in 1984. GNU is free software: everyone has the freedom to copy it and redistribute it, with or without changes. The GNU/Linux system, basically the GNU operating system with Linux added, is used on tens of millions of computers today. Stallman has received the ACM Grace Hopper Award, a MacArthur Foundation fellowship, the Electronic Frontier Foundation's Pioneer Award, and the Takeda Award for Social/Economic Betterment, as well as several honorary doctorates.

Hyperspectral Imaging Cameras, 4th May 2012, 9am-4pm Lecture Workshop at The University of Manchester

This is not an ACM SIGGRAPH Chapter event, but it should interest members.

A previous speaker at the local ACM SIGGRAPH seminars is running a show-and-tell workshop on hyperspectral imaging cameras next month: Friday 4th May 2012, 9am-4pm. Images and descriptions from the camera range are at: and you can register for the event at: These cameras effectively capture a full spectrum at every pixel, creating a complete 3D data volume per image. They have been used extensively for geological applications, including as airborne devices. Lab-based versions also exist, which could be used for a range of material or medical applications.

SIGGRAPH Animation Session, 14th March 2012, 2pm-3pm Lecture Theatre 1.1, Kilburn Building The University of Manchester


We will be presenting some of the best SIGGRAPH animation video clips from the 2011 DVD collection. This covers a unique set of animation, from professional houses to amateur enthusiasts, all of whom have excelled. Come and enjoy Wednesday 14th March. This meeting will not be transmitted across the Access Grid, so you will have to travel to Manchester. p3545-3549p40s.jpg

Amer Alroichdi, Mapping Solutions Ltd. "Hyperspectral imaging: needs for real time processing" Friday 21st October, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Hyperspectral imaging systems are increasingly in demand. The overall advantage of hyperspectral data over digital photography and multispectral data is that it not only goes beyond the visible range, but also provides continuous narrow bands that are capable of determining the physical and chemical composition of any pixel in the scene. However, due to the large amount of data produced during acquisition, the lack of real-time image processing remains a problematic issue. In addition, more algorithms need to be developed for target detection and monitoring.
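As a hedged illustration of what per-pixel spectra make possible, the classic Spectral Angle Mapper (not named in the talk, chosen here as a representative target-detection technique) compares every pixel's spectrum against a reference spectrum, largely independently of overall brightness:

```python
import numpy as np

def spectral_angle_map(cube, target):
    """Angle (radians) between each pixel spectrum and a target spectrum.

    cube   : (rows, cols, bands) hyperspectral data volume
    target : (bands,) reference spectrum of the material sought
    Smaller angles mean a closer spectral match, independent of a
    uniform scaling of illumination intensity.
    """
    flat = cube.reshape(-1, cube.shape[-1])
    dots = flat @ target
    norms = np.linalg.norm(flat, axis=1) * np.linalg.norm(target)
    angles = np.arccos(np.clip(dots / norms, -1.0, 1.0))
    return angles.reshape(cube.shape[:2])

# toy example: 2x2 image with 4 spectral bands
cube = np.array([[[1, 2, 3, 4], [4, 3, 2, 1]],
                 [[2, 4, 6, 8], [1, 1, 1, 1]]], float)
target = np.array([1.0, 2.0, 3.0, 4.0])
angles = spectral_angle_map(cube, target)
# pixel (1,0) is a scaled copy of the target, so its angle is ~0
```

Thresholding the resulting angle map is the simplest form of the per-pixel target detection the abstract calls for.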

Visualization panel for uAUug user group session - co-advertised through EGUK. 8th September 2011


9:00 - 10:00: Warwick University, uAUug (UK AVS+UNIRAS User Group) meeting: "So Long, and Thanks for All The Viz" - a get-together with a presentation on AVS/Express over the last decade. If you need access to this event please contact a Chapter member, or fully register with the EGUK event at TPCG11s.jpg

Next set of random items from the SIGGRAPH 2011 Conference in Vancouver. Friday 2nd September, 3-4pm BST Room 1.10, Kilburn Building, The University of Manchester


Repeating the previous year's success, another very fast slide show displaying a random selection of items as recorded at this year's SIGGRAPH conference, held in Vancouver. See links above to reports for past years' conferences (2008-2011).

Tobias Schiebeck, Meik Poschen and Martin Turner, University of Manchester "JISC OneVRE Project: Creating a Secure Distribution Cross-Portlet for Sharing Electronic Documents" Tuesday 12th April, 2pm-3pm Room 1.10, Kilburn Building The University of Manchester


Website launch and demonstration: with Access Grid technologies we can now join Portal-based VREs. Using SARoNGS to authenticate users and provide VO attributes, with X.509 proxy certificates to secure communication, this provides Virtual Organization based access to documents stored in the Venues. These documents can be securely shared, but also expire in the shared storage and are then removed automatically. A key component is the joining of Portal environments using the lesser-known Access Grid technologies, in order to keep researchers within their familiar secure environment while reducing administration overheads.
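As an illustration only (not the OneVRE implementation), a document store with automatic expiry of the kind described above can be sketched in a few lines:

```python
import time

class ExpiringStore:
    """Toy version of an expiring shared document store: each document
    carries a time-to-live and stale entries are purged on access.
    (Illustrative sketch only; names and behaviour are assumptions,
    not the OneVRE code.)"""

    def __init__(self):
        self._docs = {}

    def put(self, name, data, ttl_seconds):
        # record the data together with its absolute expiry time
        self._docs[name] = (data, time.monotonic() + ttl_seconds)

    def _purge(self):
        # drop every entry whose expiry time has passed
        now = time.monotonic()
        self._docs = {k: v for k, v in self._docs.items() if v[1] > now}

    def get(self, name):
        self._purge()
        entry = self._docs.get(name)
        return entry[0] if entry else None

store = ExpiringStore()
store.put("minutes.pdf", b"agenda", ttl_seconds=3600)   # still available
store.put("draft.doc", b"old", ttl_seconds=-1)          # already expired
```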

More information: Try it out:

Snacks and drinks from 1:30pm

Martin Turner, Andrew Rowley, Tobias Schiebeck, Meik Poschen, University of Manchester "Informal Workshop: Towards Collaborative Spaces through OneVRE and other projects" Room 1.10, Kilburn Building The University of Manchester

Thursday 16th December 2010


Future Development Projects involving the Access Grid as a Research Environment, including the launch of the site.


A show-and-tell event as part of the Virtual Research Environment (Video and Shared Spaces Related) OneVRE website launch, funded by JISC. A series of projects will be described, including: improved screenstreamer (desktop projection), improved multi-stream video recording and combined playback, tweets in video conferencing, shared data sets across portals, synchronised stereoscopic display, Wii-controlled laser pointers, remote high-end scientific visualization, etc.

Michael Meredith, Sheffield University "Digging into Image Data: Answering Medieval Authorship Questions using e-Science" Friday 15th October, 2-3pm Room 1.10, Kilburn Building The University of Manchester

The topic of authorship is a common research question across multiple disciplines of the humanities, arts and social sciences, and one that unites researchers in the field of computational image analysis: can adaptive image analytics attribute authorship, and if so, how accurate and computationally scalable are they when applied to diverse collections of image data? The DiD project is about collaboratively undertaking e-Science research; specifically, the Sheffield team (part of a larger project including researchers from Michigan State University and the University of Illinois) aims to identify the characteristic stylistic, orthographic and iconographic 'signatures' of particular scribes and artists, through the application of algorithms that include image edge detection, polygonal model fitting and geometric comparisons. High-performance computing will also be deployed to facilitate the processing of the large collection of high-resolution datasets. Further Information:
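As a hedged illustration of the image-analysis primitives mentioned (the project's actual pipeline is not reproduced here), a Sobel edge detector is the kind of first step used before fitting polygonal models to pen strokes:

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude for a 2-D grayscale float array --
    a basic edge-detection primitive of the kind used when
    characterising stroke shapes in scribal hands."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()   # horizontal gradient
            gy[i, j] = (patch * ky).sum()   # vertical gradient
    return np.hypot(gx, gy)

# a vertical brightness step produces strong responses at the boundary
step = np.zeros((5, 6))
step[:, 3:] = 1.0
edges = sobel_edges(step)
```

In practice such responses would be thresholded and traced into contours before any geometric comparison between scribes.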

Random items from the SIGGRAPH 2010 Conference in LA. Friday 3rd September, 2-3pm Room 1.10, Kilburn Building The University of Manchester


A very fast slide show displaying a random selection of items as recorded at this year's SIGGRAPH conference, held in LA. See links above to reports for this and past years' conferences (2008-2010).

AVS/Express product software roadmap presentation and AVS Europe company visit "Extreme-End Remote Scientific Visualization" 20th May 2010, Room 1:10 Kilburn Building Mark Mason and Roger Fleuty, AVS Europe. George Leaver, Louise Lever, Lee Margetts, James Perrin, Tobias Schiebeck and Martin Turner, Research Computing

1:45pm - 2pm

Coffee and preparation

2pm - 2:40pm

Intro and product roadmap update from AVS Europe

2:40pm - 3:10pm

MSc integration, EU IP Training and User Group examples

3:10pm - 3:30pm

ParaFEM Viewer

3:30pm - 4pm

National CRAY supercomputer, HECToR integration

4pm -

Time available for Discussion


Remote and extreme-size scientific visualization techniques. Discussion of work including: HECToR integration, ParaFEM, Erasmus-funded unstructured volume rendering, and IP training, plus reports back from the AVS user group in Switzerland and AVS in the CS Advanced MSc course. If you wish to meet the AVS Inc. team informally over lunch please email: Further details of the software product range at:

Naty Hoffman, Activision Inc. "State of the Art in Game Console Graphics" Friday 14th May, 2-3pm Room 1.10, Kilburn Building The University of Manchester


The lion's share of game graphics R&D is targeted to consoles, where graphically intensive games sell the most copies. Although the hardware has not changed in the last five years, gamers still expect graphics to continually improve. Meeting this expectation requires efficient use of hardware resources and careful selection of graphics algorithms and techniques. This talk will describe the current state of the art, and go over the graphics techniques that have proven most useful in console game development.

Andrew Davison, Imperial College "Live Localisation and Mapping with a Single Camera" 30th April 2010, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Recent advances in probabilistic Simultaneous Localisation and Mapping (SLAM) algorithms, together with modern computer power, have made it possible to create practical systems able to perform real-time estimation of the motion of a camera in 3D purely from the image stream it acquires. This is of interest in many application fields, including robotics, wearable computing and augmented reality. I will explain the main aspects of the algorithms behind visual SLAM techniques, and present recent work which is now turning towards not just estimating camera motion but also recovering dense scene models in real-time. CREW Annotated Recording of the Presentation
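As a toy illustration (not Davison's actual system), the predict/update cycle at the heart of filter-based visual SLAM can be shown with a 1-D linear Kalman filter, swapping the camera projection model for a simple constant-velocity model:

```python
import numpy as np

# Toy 1-D constant-velocity Kalman filter: the per-frame predict/update
# cycle that filter-based visual SLAM runs, here with a linear
# measurement model standing in for the full camera projection.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
H = np.array([[1.0, 0.0]])              # we only observe position
Q = 0.01 * np.eye(2)                    # process noise
R = np.array([[0.1]])                   # measurement noise

x = np.array([0.0, 0.0])                # initial state estimate
P = np.eye(2)                           # initial covariance

for z in [1.0, 2.1, 2.9, 4.2]:          # noisy position "observations"
    # predict: propagate state and grow uncertainty
    x = F @ x
    P = F @ P @ F.T + Q
    # update: weigh the measurement against the prediction
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + (K @ y)
    P = (np.eye(2) - K @ H) @ P
# x now holds a fused estimate of position and velocity
```

Real visual SLAM stacks camera pose and map-feature positions into one large state vector and linearises the projection function, but the filter loop is structurally the same.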

Erasmus student presentations: Dresden Technical University, 22nd February 2010, 3-4pm Room 1.10, Kilburn Building The University of Manchester

Sebastian Starke: A Low Bandwidth Bridge for Portal Access Grid

Portal Access Grid (PAG) has made a step towards opening up the Access Grid to researchers across multiple disciplines. While PAG allows researchers to join Access Grid meetings from their desktop machines without installing any software, it still requires a high-bandwidth network connection. This excludes researchers working at a remote site with limited network access. To breach the most restrictive firewalls, the PAG project produced a proof-of-concept software bridge that tunnels Access Grid data through port 80; it was later proposed that a bridge based on the same principle could also allow the Access Grid to function across low-bandwidth network points. This project aims to create and enhance innovative bridging technologies to: * reduce network load by selection of transmitted and incoming streams (AG room nodes usually send 3-5 streams, one of which is normally a central (main) view of the speaker(s)); * downgrade the stream quality (reduction of resolution/frame rate) at the bridge; * use a single locally initiated UDP port for all traffic sent and received, reducing the number of ports that need to be opened in the firewall.

Tino Ernst: AVS/Express Unstructured Raycast Project

The aim of this project is to create a ray-casting rendering module for unstructured meshes in AVS/Express. Volume rendering in AVS/Express is limited to uniform meshes, as there are plenty of standard algorithms and optimizations for those in the public domain. Ray casting of unstructured meshes is more complicated, as all the assumptions about cubes and neighbourhood relations used for uniform grids are invalid. The project looks into the implementation of ray casting for tetrahedron- and hexahedron-based cell sets. Initially the aim is to create an off-screen rendering method. Optimizations to reach real-time rendering capabilities are not the primary goal.
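A core operation such a ray caster needs, sketched here as an assumption about the approach rather than the project's actual code, is deciding whether a sample point lies inside a tetrahedral cell; barycentric coordinates answer that and double as interpolation weights:

```python
import numpy as np

def barycentric(tet, p):
    """Barycentric coordinates of point p in tetrahedron tet (4x3 vertices).

    All four coordinates lie in [0, 1] iff p is inside the cell -- the
    basic containment/interpolation test a ray caster performs at each
    sample along a ray through an unstructured mesh."""
    v0, v1, v2, v3 = tet
    T = np.column_stack((v1 - v0, v2 - v0, v3 - v0))
    l123 = np.linalg.solve(T, p - v0)
    return np.concatenate(([1.0 - l123.sum()], l123))

tet = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
inside = barycentric(tet, np.array([0.25, 0.25, 0.25]))
outside = barycentric(tet, np.array([1.0, 1.0, 1.0]))
# inside: all four coordinates are 0.25 (the centroid)
# outside: at least one coordinate is negative
```

On a uniform grid this test is a trivial index computation, which is exactly the assumption that breaks down for unstructured cell sets.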

Phil Cross, Nikki Rogers, Martin Turner, Andrew Rowley, Anja Le Blanc, Tobias Schiebeck, Meik Poschen, Universities of Bristol and Manchester "Informal Workshop: Towards Collaborative Spaces forming an Enhanced Research Environment" Room 1.10, Kilburn Building The University of Manchester

Thursday 17th December 2009


Virtual Research Environment CREW (Collaborative Research Events on the Web) Extension outcomes and results. Demos will show the integration with the Access Grid multi-site mathematics post-graduate lecture series (MAGIC) as well as the MIMAS Intute data harvesting processes.


A Retrospective of Development Projects involving the Access Grid as a Research Environment, designed to share and enjoy. Projects include: eDance, CSAGE, memetic, CREW, ISBE, RACE, ViCoVRE, OneVRE, RoboViz, as well as unusual video conferencing sessions in Rm1:10 and the role of the Access Grid Support Centre.

"Access Grid - Next Generation Video Conferencing Development Projects at the University of Manchester" Thursday 17th December 2009, 2-3:30pm Room 1.10, Kilburn Building The University of Manchester


The session is being organized as part of the Virtual Research Environment for CREW (Collaborative Research Events on the Web) project funded by JISC. This project aims to enhance a framework that allows video conferencing environments to act as a true research development system. Research Development Projects over the last five years will be demonstrated: choreographic annotation and multi-site projection (eDance), stereoscopic transmission (CSAGE), mind-mapping tools (memetic), semantic web event searching (CREW), repository meta-data (RACE), video conversion (ViCoVRE), multi-portal data sharing (OneVRE), robot command and steering (RoboViz), as well as unusual video conferencing sessions within Rm1:10 and the role of the Access Grid Support Centre.

"CREW - Collaborative Research Events on the Web: Extensions and Integration with Intute and the Access Grid" Thursday 17th December 2009, 11-1pm Room 1.10, Kilburn Building The University of Manchester


This morning workshop session is being organized as part of the Virtual Research Environment for CREW (Collaborative Research Events on the Web) project funded by JISC. This project aims to enhance a framework that allows video conferencing environments to act as a true research development system. CREW aims to improve access to research event content by capturing and publishing the scholarly communication that occurs at events like conferences and workshops. The project is developing tools to enable presentations and similar sessions to be recorded and annotated, and to enable powerful searches across distributed conference and related research data. Demos will show the integration with the Access Grid multi-site mathematics post-graduate lecture series (MAGIC), as well as the MIMAS Intute data harvesting process. During this four-month period bespoke versions of the software have been customised, resulting in the archiving of over 4000 events and the storage of over 260 hours of presentations.

Fan 'Andy' Zhang, The Chinese University of Hong Kong "Parallel-Split Shadow Maps on Programmable GPUs" 4th December 2009, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Like any other visual effect in real-time applications, practical shadow algorithms should allow developers to trade cost and quality. As a popular real-time shadowing technique, Shadow Mapping however cannot achieve this goal in complicated scenes, because it does not adequately trade performance cost for quality. Parallel-Split Shadow Maps (PSSMs) are one of the most promising shadow mapping techniques to achieve this goal. However, without using hardware acceleration, the performance drop caused by multiple rendering passes prevents this technique from being extensively used in mass-market applications. In this talk, we show how to take advantage of modern Graphics Processing Units (GPUs) to improve performance in PSSMs. Furthermore, a few practical issues when integrating PSSMs into real games are also discussed.
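The practical split scheme behind PSSMs blends a logarithmic split (which equalises perspective aliasing error across the view frustum) with a uniform split. A sketch of that blend, computing the split distances on the CPU (parameter names are ours):

```python
def pssm_splits(near, far, m, lam=0.5):
    """Split distances for Parallel-Split Shadow Maps.

    Blends the logarithmic split  C_log = n * (f/n)^(i/m)  with the
    uniform split  C_uni = n + (f-n) * i/m,  weighted by lam, giving
    m frustum slices that each receive their own shadow map.
    """
    splits = []
    for i in range(m + 1):
        c_log = near * (far / near) ** (i / m)
        c_uni = near + (far - near) * i / m
        splits.append(lam * c_log + (1 - lam) * c_uni)
    return splits

# e.g. 4 splits over a near=1, far=1000 view frustum
ds = pssm_splits(1.0, 1000.0, 4)
# ds[0] == near and ds[-1] == far; the interior split planes sit
# between the purely logarithmic and purely uniform placements
```

The GPU acceleration discussed in the talk is about rendering all slices without m full scene passes; the split placement itself stays this cheap.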

SC, the Annual International Conference for High Performance Computing, Networking, Storage and Analysis, sponsored by ACM SIGARCH and the IEEE-CS, is making a series of sessions at its 2009 conference available as online webcasts in November. Chat capabilities will also be available for viewers during the live video streams.

Tuesday, Nov 17, 8:30 AM - 10:00 AM PST (11:30 AM - 1:00 PM EST, 16:30 - 18:00 UTC/GMT) The Rise of the 3D Internet: Advancements in Collaborative and Immersive Sciences. Presented by: Justin Rattner, Intel Senior Fellow and CTO

Wednesday, Nov 18, 8:30 AM - 9:15 AM PST (11:30 AM - 12:15 PM EST, 16:30 - 17:15 UTC/GMT) Systems Medicine, Transformational Technologies and the Emergence of Predictive, Personalized, Preventive and Participatory (P4) Medicine. Presented by: Leroy Hood, M.D., Ph.D., President and co-founder of the Institute for Systems Biology

Carlye Archibeque, 2009 SIGGRAPH Computer Animation Festival, Executive Producer "CAF (Computer Animation Festival) Screening" Monday 9th November 2009, 2-3pm Room 1.10, Kilburn Building The University of Manchester (Refreshments will be available.)


Carlye Archibeque will be presenting some of the best SIGGRAPH animation video clips from the 2009 DVD collection. This covers a unique set of animation, from professional houses to amateur enthusiasts, all of whom have excelled. Thanks also to the NASA/Goddard Space Flight Center Scientific Visualization Studio for permission to show some of their stereoscopic movies. pano4993-4997_vs.jpg

Graham Fyffe, "Cosine Lobe Based Relighting from Gradient Illumination Photographs" Monday 9th November 2009, 1-2pm Room 1.10, Kilburn Building The University of Manchester


We present an image-based method for relighting a scene by analytically fitting a cosine lobe to the reflectance function at each pixel, based on gradient illumination photographs. Realistic relighting results for many materials are obtained using a single per-pixel cosine lobe obtained from just two color photographs: one under uniform white illumination and the other under colored gradient illumination. For materials with wavelength-dependent scattering, a better fit can be obtained using independent cosine lobes for the red, green, and blue channels, obtained from three monochromatic gradient illumination conditions instead of the colored gradient condition. We explore two cosine lobe reflectance functions, both of which allow an analytic fit to the gradient conditions. One is non-zero over half the sphere of lighting directions, which works well for diffuse and specular materials, but fails for materials with broader scattering such as fur. The other is non-zero everywhere, which works well for broadly scattering materials and still produces visually plausible results for diffuse and specular materials. Additionally, we estimate scene geometry from the photometric normals to produce hard shadows cast by the geometry, while still reconstructing the input photographs exactly.
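A sketch of the per-pixel recovery idea for the monochromatic-gradient variant described above, under the assumption (ours, for illustration) that the gradient patterns ramp linearly from 0 to 1 along each axis and that normalisation details are simplified:

```python
import numpy as np

def lobe_axis(full, gx, gy, gz):
    """Per-pixel lobe axis from gradient-illumination ratios.

    full       : image under uniform (full-on) illumination
    gx, gy, gz : images under gradients ramping 0 -> 1 along each axis
    The ratio g/full lies in [0, 1]; remapping it to [-1, 1] and
    normalising gives the centre direction of the per-pixel
    reflectance lobe (the analytic fit for a cosine lobe).
    """
    r = np.stack([gx / full, gy / full, gz / full], axis=-1)
    axis = 2.0 * r - 1.0
    return axis / np.linalg.norm(axis, axis=-1, keepdims=True)

# a pixel whose lobe points straight along +z responds at half
# intensity to the x and y gradients and at full intensity to z
full = np.ones((1, 1))
gx = np.full((1, 1), 0.5)
gy = np.full((1, 1), 0.5)
gz = np.ones((1, 1))
axis = lobe_axis(full, gx, gy, gz)
# axis[0, 0] is (0, 0, 1)
```

Relighting then evaluates the fitted lobe against any new lighting direction; the paper's colored-gradient variant packs the three ramps into the RGB channels of a single photograph.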

Matt Mahon, CAVA JISC Project, Round table discussion meeting on audio-video archives and repositories Friday 23rd October 2009, 2-3pm Room 1.10, Kilburn Building, The University of Manchester

Toby Breckon, Applied Mathematics and Computing Group, School of Engineering, Cranfield University "Automatic Object Detection and Recognition from UAV Platforms" Friday 16th October 2009, 2-3pm Room 1.10, Kilburn Building, The University of Manchester


UAV platforms are increasingly being considered for search and surveillance operations both on land and at sea due to factors such as cost, convenience and reduced risk. However, the use of UAV sensor platforms in such scenarios carries a significant overhead: the manual analysis of extensive aerial video imagery. Often this imagery consists of vast areas of barren seascape or farm/desert land in which objects of interest to the operator seldom appear. The manual screening of such imagery is both resource-consuming and subject to human error, and this issue scales with the number of platforms deployed in any given search or surveillance operation. Here we discuss recent work in the automatic detection and recognition of vehicles, people and generic salient objects (e.g. crash wreckage) from UAV platforms. Work within this area is limited, and we discuss the many challenges of working with UAV imagery and the additional challenges of experimental work with UAV platforms. Several research results from our work in this domain are presented, including those from the MoD Grand Challenge 2008 which formed part of the Stellar Team's winning SATURN system.

Paul Miller, The Cloudofdata "Being at the Interface" Monday 12th October 2009, 2-3pm Room 1.10, Kilburn Building, The University of Manchester. From the website: "Paul Miller works at the interface between the worlds of Cloud Computing and the Semantic Web, providing the insights that enable you to exploit the next wave as we approach the World Wide Database." Not directly related to SIGGRAPH, but an interesting aside. CREW Annotated Recording of the Presentation

Yoge Patel, Ian Cowling and Ken Wahren, Blue Bear Systems Ltd. "Unmanned Aerial Vehicle Systems" Thursday 23rd July 2009, 2-3pm Room 1.10, Kilburn Building, The University of Manchester


Blue Bear Systems Research is a successful and established SME focussed on the research, development, trials and application of Unmanned Aerial Vehicle systems and technologies. BBSR is a member of Team Stellar, winners of the MoD Grand Challenge competition in 2008 with the Saturn system. The Saturn system consists of UAV platforms, an unmanned ground vehicle, and a ground control station with built-in automatic threat detection. The system therefore provides a portable solution for reconnaissance within the urban environment.

Veronica Sundstedt, Graphics Vision and Visualization Group, Trinity College Dublin "Look! - Eye Tracking in User Studies" 26th June 2009, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Computer graphics and interactive techniques allow us to create a vast amount of visual stimuli. Knowledge of human perception can affect the creation of these images and virtual environments. Over the last few years the evaluation of computer graphics stimuli has become increasingly important. There are many different techniques for evaluating stimuli some of which involve human participants. This talk will discuss how eye tracking can support user studies. Apart from using eye tracking as an interaction device in virtual environments, it can also be a helpful tool in usability testing and evaluation of algorithms and techniques. This talk will describe experimental methodologies based on case studies in computer graphics. We will discuss what additional data we can collect and analyze using eye tracking, which could not have been measured explicitly using questionnaires. We will also talk about how eye tracking has been and can be used in different application areas related to computer graphics and interactive techniques. The aim is to share some experiences and lessons learned when using eye tracking systems in user studies.

CRAY - The Supercomputer Company, 8th May 2009, 3pm Room 1.10, Kilburn Building The University of Manchester


Andy Mason (CRAY) will be presenting current installation examples, the HECToR national supercomputer upgrade route to phase 2 (400 TFlops) and phase 3 (1 PetaFlop?), as well as a roadmap of the new family of products. The RCUK roadmap includes speculative routes beyond 2013 onwards to 2020: ideas include not just a new general-purpose supercomputer but also the opportunity to deploy specialist machines (graphics cards?).

If you wish to have a meeting with Andy Mason, on anything from buying your own machine to speeding up code, please email and we will arrange a slot.

Mercury Computer Systems - Visualization Solutions, 24th April 2009, 12noon Room 1.10, Kilburn Building The University of Manchester


Jason Phillips, Account Manager - Northern Europe, Visualization Sciences Group, Mercury Computer Systems, will be presenting a roadmap of the new family of visualization products available. Images from Mercury Computer Systems Inc. show various materials science visualizations using Avizo: ceramic processing (microtomography), corrosion evolution in aluminium aerospace alloy 2024, and crystallography (nanocrystal visualization).

SIGGRAPH Animation Session, 13th March 2009, 3-4pm Room 1.10, Kilburn Building The University of Manchester


We will be presenting some of the best SIGGRAPH animation video clips from the 2008 DVD collection. This covers a unique set of animation, from professional houses to amateur enthusiasts, all of whom have excelled. Wine will be available, and we will donate £2 to Comic Relief for every person who turns up. Come and enjoy Friday 13th. This meeting will not be transmitted across the Access Grid, so you will have to travel to Manchester. crel_fullp_cs_vs.jpg

Visualization Day 19th February 2009 CS1.10, Kilburn Building, University of Manchester


Research Computing Services, in conjunction with the UK AVS+Uniras User Group (UAUUG) and vizNET, is holding a Visualization Day on 19th February 2009. All are welcome to see local cutting-edge research and development visualization project results.

Further details and the agenda can be found at:

This meeting will take place in Room CS1.10, 1st Floor, Kilburn Building, University of Manchester (building number 39 on the University of Manchester campus guide). This is in association with uAUug. If you wish to attend, please respond to Mary McDerby so we know numbers for lunch.

Timo Kunkel, University of Bristol "Colour Appearance Modelling: Predicting how we perceive colours" 30th January 2009, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Colour in Computer Graphics is often assumed to be a trivial problem. Seen from a physical or photometric point of view this might be acceptable, as it is possible to break down measurements of colour into physically describable elements like wavelength or energy flux. But these physical values alone cannot describe the vast range of sensations our visual system is capable of perceiving. From the moment photons hit the retina, complex processes like adaptation, compression, changes of signal encoding and feedback loops take place, from the photoreceptors through to the visual cortex. Understandably, describing the visual system in a model is also a complex task. Colour appearance modelling (CAM) offers a solution by describing the major processes occurring in the Human Visual System (HVS), taking into account many factors reported by both neuroscientific research and psychophysical studies. In this presentation we are going to talk about the use of Colour Appearance Models in Computer Graphics and adjacent fields, including the current state of the art. We will discuss their applicability as well as their limitations when used as a tool to gain a more realistic description of colour appearance. CREW Annotated Recording of the Presentation
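One adaptation step at the core of colour appearance models is von Kries scaling, in which each cone response is scaled by the ratio of the adapting whites. A minimal sketch (CIECAM02 and other full CAMs wrap this in further transforms and partial-adaptation factors not shown here):

```python
import numpy as np

def von_kries_adapt(lms, lms_src_white, lms_dst_white):
    """Von Kries chromatic adaptation: scale each cone (LMS) response
    by the ratio of the destination white to the source white -- the
    basic adaptation step inside colour appearance models."""
    gain = np.asarray(lms_dst_white, float) / np.asarray(lms_src_white, float)
    return np.asarray(lms, float) * gain

# A stimulus equal to the source white maps exactly onto the
# destination white: it still "looks white" after adaptation.
src_white = [0.9, 1.0, 0.8]   # cone responses under a warm illuminant
dst_white = [1.0, 1.0, 1.0]   # cone responses under the reference white
adapted = von_kries_adapt(src_white, src_white, dst_white)
# adapted == dst_white
```

This diagonal scaling is why a sheet of paper appears white under both tungsten and daylight, a perceptual constancy that raw physical measurements alone cannot express.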

SIGGRAPH Animation Presentation Day 22nd December 2008, 2pm CS1.10, Kilburn Building, University of Manchester. Pre-event

Martin Preston, Framestore "R&D in Film Production" 19th December 2008, 2-3pm Room 1.10, Kilburn Building The University of Manchester


The role of research and development in film production has changed as the visual effects industry has matured. Whereas in the early years programmers were involved in every stage of effects production (from writing the renderer to building the film recorder) we now concentrate on much more specialised portions of the production. This talk outlines several areas of active development, and describes the sort of problems we still need to solve! CREW Annotated Recording of the Presentation

Manuel Lima, "VisualComplexity: A visual exploration on mapping complex networks" 12th December 2008, 2-3pm Room 1.10, Kilburn Building The University of Manchester

tn_lima_s.jpg VisualComplexity (VC) is a unified resource space for anyone interested in the visualization of complex networks. With over 600 projects, the goal is to leverage a critical understanding of different visualization methods across a series of disciplines as diverse as Biology, Social Networks and the World Wide Web. This talk will draw on the existing pool of knowledge from VC to convey a current portrait of network visualization. It will illustrate some of its current trends and representation methods, and explore the reasons behind the recent outburst.

Bob Pette, Vice President, Silicon Graphics Visualization Group, "PowerVue and Other topics" 1st December 2008, 4-5pm Room 1:10, Kilburn Building The University of Manchester

"Bob Pette leads the Silicon Graphics Visualization Group, which is providing innovative solutions to help high-performance organizations work with rapidly growing volumes of visual information. In his 20-year career at Silicon Graphics, Bob worked directly with customers to design visualization solutions for industries ranging from aerospace design to energy exploration. Bob formed the Silicon Graphics Petroleum Technology Center to assist oil exploration, production and services companies with application benchmarking and development. As vice president of SGI Global Professional Services, Bob expanded the Silicon Graphics' visualization practice via the design, development and implementation of Reality Center environments, simulators, CAVE installations, and immersive auditoriums for the Louisiana Immersive Technology Enterprise, Air Force Research Laboratory, ADRIN (part of India's Department of Space), Oil and Natural Gas Corporation Ltd., Sikorsky Aircraft, and several energy companies and automakers. Throughout his career at Silicon Graphics, Bob has held positions in Systems Engineering, Application and Solutions Development, Customer Benchmarking, Customer Services and Services and Sales Operations. Bob received his B.S. in Aerospace Engineering from Georgia Tech and his B.S. in Mathematics from the University of Tampa." CREW Annotated Recording of the Presentation

Bill Sellers, Integrative Vertebrate Biology, Faculty of Life Sciences, University of Manchester "Running with dinosaurs: fossils, physics and physiology" 1st December 2008, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Traditional techniques for reconstructing gait in fossil dinosaurs involve either complex animatronic machines or stop-motion animation techniques. These techniques rely on a good knowledge of the skeletal anatomy and a familiarity with the range of locomotor styles seen in modern animals. These are mixed with a great deal of artistic skill and often produce visually stunning results. However, using these approaches it is impossible to say whether the animal could actually have moved as portrayed. The movements used are anatomically possible, but in all likelihood, had the animal actually tried to move like this, it would have fallen over. Even had it managed to stay upright, it would not have minimised its cost of locomotion as living animals do. These difficulties can be overcome if we include both Newtonian physics and musculoskeletal physiology, in conjunction with skeletal anatomy, in our reconstructions. To do this we create a computer model of the musculoskeletal system of our target vertebrate fossil. The limbs and body are reconstructed as jointed segments, and the muscles and tendons are force generators that power the movement. This requires us to estimate various soft tissue parameters which are generally not preserved in the fossil record, so we use a combination of phylogenetic and functional bracketing to estimate these values from living animals. The computer model is then imported into a physics simulator which solves the equations of motion so that the model moves appropriately given the forces applied by the muscles, by contact with the ground, and by gravity. Unfortunately, such a model will not spontaneously walk or run, so we use a genetic algorithm search procedure to find muscle activation patterns that optimise global parameters such as minimising energy cost or maximising speed. The end result is the generation of a stable gait that is anatomically, physiologically and physically possible.
At the same time, the gait can represent an objective estimate of the most energetically efficient gait, or alternatively the fastest gait possible, for a given animal. Sadly, current technology does not produce gaits that look as good as the more artistic techniques, and this new technique highlights the uncertainty inherent in all attempts at gait reconstruction. Ultimately, however, it is a very powerful approach for the scientific understanding of dinosaur gait and we predict that, as the technology advances, it will also find a place assisting more artistic reconstructions. CREW Annotated Recording of the Presentation
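The search procedure described above can be sketched as a toy genetic algorithm. This is not the speakers' simulator: `locomotion_cost` below is a hypothetical stand-in for the physics-based cost (a real implementation would score each activation pattern by running the gait simulation), and all parameters are illustrative.

```python
import random

# Stand-in for the physics simulator: scores a muscle-activation pattern
# (here just a vector of numbers in [0, 1]) by a cost to be minimised.
# A real simulator would integrate the equations of motion instead.
def locomotion_cost(activations):
    target = [0.2, 0.8, 0.5, 0.3]   # hypothetical optimum; illustrative only
    return sum((a - t) ** 2 for a, t in zip(activations, target))

def genetic_search(cost, n_params=4, pop_size=30, generations=200,
                   mutation=0.1, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]              # selection (elitist)
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)           # uniform crossover
            child = [rng.choice(pair) for pair in zip(a, b)]
            i = rng.randrange(n_params)           # mutate one gene
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, mutation)))
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = genetic_search(locomotion_cost)
print(best, locomotion_cost(best))
```

With the physics simulator substituted for `locomotion_cost`, the same selection/crossover/mutation loop searches for activation patterns that yield stable, efficient gaits.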

Keith Merkham, MBDA Systems "Missing Bricks and Dead Pixels" Thursday 20th November 2008, 1-2pm Room 1.10, Kilburn Building The University of Manchester


The aim of this talk is to give an overview of the image processing work at MBDA and show how image processing links with other system elements. One example will be the monitoring of movements on a building site, and the way in which military technology can be transferred to civil applications. Another point to be discussed is the influence of image processing on the requirements for the camera sensor.

Jon Gibson, The Numerical Algorithms Group Ltd "The HECToR National Supercomputing Service and the Research Community" 7th November 2008, 2-3pm Room 1.10, Kilburn Building The University of Manchester

Not a SIGGRAPH meeting, but may be of related interest. PDF file at seminar_jongibson.pdf CREW Annotated Recording of the Presentation

James Paterson, Eykona Technologies Ltd. "The evolution of an imaging system: Photographing weird objects for fun and profit!" Friday 17th October 2008, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Dr James Paterson, Chief Technology Officer of Eykona Technologies Ltd and formerly of the Oxford University Robotics Department, will talk about his research into 3D imaging and how it both led to, and was led by, the spinning out of Eykona from the University. James will show how continual themes of camera localization and photometric / geometric reconstruction run throughout this work and will talk about the transition from research lab to commercial entity. A selection of 3D imaging systems will be presented, including work on imaging chronic wounds, textures for video games, Mayan temples, fossils, antique coins and other cultural artefacts. James will also attempt a live demo with Eykona’s latest prototype system!

Materials Science Visualization Day 25th September 2008 CS1.10, Kilburn Building, University of Manchester


Research Computing is hosting an open forum day to show-and-tell high-end processing and visualization software tools that have been developed and used within the University of Manchester. We are also aiming to help guide future development and research.

The schedule and list of topics is at:

If you wish to attend, please respond to Mary McDerby ( ) so we know numbers for lunch.

Paul Debevec University of Southern California Institute for Creative Technologies "New Techniques for Acquiring, Rendering, and Displaying Human Performances" Monday 22nd September 2008, 2-3pm Room 1.10, Kilburn Building The University of Manchester


This presentation will cover recent work in the USC ICT graphics laboratory on acquiring, rendering and displaying photoreal models of people, objects and dynamic performances. It will begin with an overview of image-based lighting techniques for photorealistic compositing and reflectance acquisition (which have been used to create realistic digital actors in films such as Spiderman 2 and Superman Returns). It will first present our Light Stage 6 project combining image-based relighting with free-viewpoint video to capture and render full-body performances, as well as new 3D face scanning processes that capture high-resolution facial geometry and reflectance from a small number of photographs. It will conclude with a new 3D display that leverages 5,000 frames per second video projection to show auto-stereoscopic, interactive 3D imagery to any number of viewers simultaneously.

Dr. Paul Debevec is the Associate Director of Graphics Research at the University of Southern California's Institute for Creative Technologies (USC ICT) and a Research Associate Professor in USC's Department of Computer Science. His Ph.D. thesis at UC Berkeley presented Façade, an image-based modeling and rendering system for creating photoreal virtual cinematography of architectural scenes from photographs. Using Façade, he led the creation of a photoreal animation of the Berkeley campus for his 1997 film The Campanile Movie, whose techniques were later used to create virtual backgrounds for The Matrix. He went on to demonstrate new image-based lighting techniques in his animations Rendering with Natural Light, Fiat Lux, and The Parthenon. Debevec led the design of HDR Shop, the first high dynamic range image editing program, and co-authored the recent book High Dynamic Range Imaging. He received ACM SIGGRAPH's Significant New Researcher Award in 2001 and recently chaired the SIGGRAPH 2007 Computer Animation Festival.

Simon Robinson The Foundry "Visual Algorithms and their Role in Modern Digital Feature Film Production" 4th July 2008, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Based around a light introduction to live-action feature-film compositing, this talk gives an overview of the real-world application of signal processing in post-production. While human artistry itself is still the hub of 2D digital effects work, I show how it can be greatly assisted today by the use of appropriate algorithms with appropriate implementations. As the film industry is rapidly evolving, I will also identify our favourite research areas for the next generation of tools. No equations will be harmed during this discussion.

Mary Whitton Department of Computer Science, University of North Carolina at Chapel Hill Special post TP.CG session on VEs, Illusions and Presence 11th June 2008, 2-5pm Room 1.10, Kilburn Building The University of Manchester


Virtual environments (VEs) have now been around for a couple of decades, if not longer. They have ranged from CAVEs to portable units, coming in various sizes and shapes, and at times ranging in cost from ridiculously expensive to compromisingly cheap. As new ideas and faster computer graphics processors and projectors emerge, certain VEs die from lack of use, but new spaces present themselves. This session, an open discussion forum following the Eurographics UK Chapter conference, includes views from Mary Whitton, University of North Carolina, Roger Hubbold, University of Manchester, Anthony Steed, UCL, and others. It is designed to consider briefly the past spaces that VEs have filled and their use, but also the new spaces that are, or may be, emerging. CREW Annotated Recording of the Presentation

Koji Koyamada Visualization Laboratory, Kyoto University "A stochastic approach for rendering irregular volumes" 3rd April 2008, 2-3pm Room 1.10, Kilburn Building The University of Manchester


In this talk, we describe a stochastic approach for rendering irregular volume datasets. It is well known that the memory bandwidth consumed by visibility sorting becomes the limiting factor when carrying out volume rendering of large irregular volume datasets. Previous techniques without visibility sorting ignored absorption or emission effects in their optical models. To solve the problem, our technique represents a given irregular volume dataset as a set of opaque, emissive particles whose size is sufficiently small with respect to the pixel size. We applied our proposed technique to a volume composed of about 1G tetrahedral cells to confirm its effectiveness. Our particle-based volume rendering (PBVR) technique has been applied to visualize the source of the dental fricative sound. The dental fricative voice has been analyzed by large-scale CFD simulation because the sound is thought to be generated by turbulence around the front teeth. In the simulation model, the oral cavity shape of the dental fricative was obtained by a Cone Beam CT (CBCT) scanner that can take 512 slices of 512x512 pixels. From the volume data composed of those slice images, the oral cavity was extracted as an isosurface, and 72M hexahedral cells were constructed for the large eddy CFD simulation. The resulting irregular volume dataset is composed of 16 datasets which are the results of the distributed CFD computation. Until now, surface-based visualization has been conducted, since currently available volume rendering software cannot deal with multiple large hexahedral volume datasets. In these figures, we can easily see that there were multiple distinct areas of high pressure values, which relate to the sound sources. CREW Annotated Recording of the Presentation
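The core idea of PBVR, generating opaque emissive particles with density proportional to opacity so that a plain z-buffer replaces visibility sorting, can be sketched as follows. The scalar field, transfer function and resolution are toy stand-ins, not the datasets from the talk, and a full PBVR implementation would repeat this over many particle ensembles and average the results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar field on the unit cube: a blob centred at (0.5, 0.5, 0.5).
def scalar(p):
    return np.exp(-20.0 * np.sum((p - 0.5) ** 2, axis=-1))

def opacity(v):
    return 0.01 * v        # illustrative transfer function; low -> sparse

W = H = 32
n = 200_000
cand = rng.random((n, 3))                            # candidate positions
pts = cand[rng.random(n) < opacity(scalar(cand))]    # density ∝ opacity

# Orthographic projection down z. Because particles are opaque, the
# nearest one wins per pixel: an ordinary z-buffer, so no visibility
# sorting of cells is ever needed.
ix = np.minimum((pts[:, 0] * W).astype(int), W - 1)
iy = np.minimum((pts[:, 1] * H).astype(int), H - 1)
img = np.zeros((H, W))
zbuf = np.full((H, W), np.inf)
for x, y, z, v in zip(ix, iy, pts[:, 2], scalar(pts)):
    if z < zbuf[y, x]:
        zbuf[y, x] = z
        img[y, x] = v                 # emission of the nearest particle
print(len(pts), img.max())
```

The sub-pixel particle assumption from the talk is what lets averaging over repeated ensembles converge to the correct semi-transparent appearance.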

Jan Kautz Department of Computer Science, UCL "Interactive Editing and Modeling of Bidirectional Texture Functions" 14th March 2008, 2-3pm Room 1.10, Kilburn Building The University of Manchester


While measured Bidirectional Texture Functions (BTFs) enable impressive realism in material appearance, they offer little control, which limits their use for content creation. In this work, we interactively manipulate BTFs and create new BTFs from flat textures. We present an out-of-core approach to manage the size of BTFs and introduce new editing operations that modify the appearance of a material. These tools achieve their full potential when selectively applied to subsets of the BTF through the use of new selection operators. We further analyze the use of our editing operators for the modification of important visual characteristics such as highlights, roughness, and fuzziness. Results compare favorably to the direct alteration of micro-geometry and reflectances of ground-truth synthetic data. CREW Annotated Recording of the Presentation

Neil Gatenby Lightwork Design "Global Illumination for the Masses" 22nd February 2008, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Global Illumination (GI) algorithms came to fruition in the graphics labs of the USA, Europe, Japan, and beyond, during the 1980s and 1990s. The researchers who developed the algorithms had expert knowledge of the underlying physics, and an even more expert knowledge of how their own software behaved (and misbehaved!). Ten years ago, only the most specialised applications contained GI rendering algorithms - those targeted at architects, or automotive manufacturers, or digital imagery for movies/advertising. The number of seats was always small, and the price per seat was always high. Radiosity, ray tracing and photon mapping, final gathering, irradiance caches and the use of MC and QMC importance sampling may all appear on an undergraduate graphics course in 2008, but they are still not the kind of thing one overhears being discussed in the average pub or cafe! Yet today, it is hard to find AEC or MCAD software that does not contain such algorithms. Many of the pubs and cafes where the algorithms are not discussed contain customers who have kitchen (or bathroom, or garden) design software on their PC/Mac at home. They might not use it very often, nor explore its limits when they do use it, but use it they do. There are many millions of such users, and none of them has paid very much for the software in question. This talk will discuss the difficulties and opportunities that arise when designing GI software for such a marketplace, and will outline some of the shortcuts and tricks that are commonly employed by those writing the code. CREW Annotated Recording of the Presentation
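As a flavour of the MC importance sampling mentioned above, the sketch below estimates the irradiance at a point under a hypothetical sky (brighter towards the zenith) with both uniform and cosine-weighted hemisphere sampling. The cosine-weighted estimator has noticeably lower variance because its sample density matches the cosine factor in the integrand.

```python
import math, random

rng = random.Random(0)

def radiance(theta, phi):
    # hypothetical sky model: brighter towards the zenith
    return 0.5 + 0.5 * math.cos(theta)

def uniform_sample():
    # uniform over the hemisphere: pdf = 1 / (2*pi) per steradian
    u, v = rng.random(), rng.random()
    return math.acos(u), 2 * math.pi * v, 1 / (2 * math.pi)

def cosine_sample():
    # cosine-weighted importance sampling: pdf = cos(theta) / pi
    u, v = rng.random(), rng.random()
    theta = math.acos(math.sqrt(u))
    return theta, 2 * math.pi * v, math.cos(theta) / math.pi

def irradiance(sampler, n=100_000):
    # Monte Carlo estimate of  E = ∫ L(ω) cos(θ) dω  over the hemisphere
    total = 0.0
    for _ in range(n):
        theta, phi, pdf = sampler()
        total += radiance(theta, phi) * math.cos(theta) / pdf
    return total / n

e_uni = irradiance(uniform_sample)
e_cos = irradiance(cosine_sample)
print(e_uni, e_cos)      # both converge to the exact value 5*pi/6
```

For this sky the exact irradiance is 5π/6 ≈ 2.618; production renderers apply the same pdf-division trick to far more elaborate sampling strategies.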

Frederik Lasage MARCEL Observatory, London School of Economics and Political Sciences, Media and Communications Dept. "Articulating flexibility: Access Grid as a creative tool" 8th February 2008, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Considerable research has been done recently on how designers and engineers can collaborate with artists in order to find innovative uses for information and communication technologies. The Access Grid has been a noteworthy example of this kind of collaboration. Its status as a flexible tool for videoconferencing has enabled many different kinds of technical experimentation. But could this flexibility itself be a limitation? Employing the analogy of the ‘career’ as a means of analysing the design and eventual appropriation of a technology, this presentation will examine some of the key conventions surrounding the development of the Access Grid as a tool for creative experimentation and collaboration, and suggest possible new directions for research in this field. CREW Annotated Recording of the Presentation

Fred Brooks "How Do We Know What to Design? How Linear Can the Design Process Be?" 25th January 2008, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Most engineers seem to have an implicit rational model of the design process that starts with requirements and objectives, proceeds to a design concept, and then gets detailed, delivered, and maintained. Set forth in the 1960s-1970s by Herbert Simon, Pahl & Beitz, and Winston Royce (as the Waterfall Model in software), this model has dominated our thinking and even our formal processes. We will examine what's right with this model and what's wrong with it. I assert that it is not merely difficult, but in fact impossible, to get the requirements for an original system design right before one begins doing the design. We will look at alternatives that have been proposed, ask why an unrealistic model has persisted so long, and ask where we go from here.

Frederick P. Brooks, Jr., is Kenan Professor of Computer Science at the University of North Carolina at Chapel Hill. He was an architect of the IBM Stretch and Harvest computers. He was Corporate Project Manager for the IBM System/360, including development of the System/360 computer family hardware, and the Operating System/360 software. He founded the UNC Department of Computer Science in 1964 and chaired it for 20 years.

His research has been in computer architecture, software engineering, and interactive 3-D computer graphics ("virtual environments"). His best-known books are The Mythical Man-Month: Essays on Software Engineering (1975, 1995), and Computer Architecture: Concepts and Evolution (with G.A. Blaauw, 1997).

Dr. Brooks has received the ACM Turing Award and is a Distinguished Fellow of the British Computer Society and a Foreign Member of the U.K. Royal Academy of Engineering, and the Netherlands Academy of Arts and Sciences. He is currently a visiting scholar with the University of Cambridge's Rainbow group.

Nick Holliman Durham University "Binocular Imaging Projects in the Durham Visualization Laboratory" 14th December 2007, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Human binocular vision creates the sensation of stereopsis, providing us with advantages in interpreting the spatial structure of the real world. The introduction of a growing number of electronic displays capable of presenting an artificial binocular stimulus creates significant new challenges for visualization scientists. These include how to ensure binocular images are always comfortable to view, and in what ways the stereoscopic perceived-depth effect can be used to convey useful information within a visualization. I will describe current inter-disciplinary research projects at the Durham Visualization Laboratory in binocular imaging, including: collaborations with psychologists to investigate the response of the eye to artificial binocular stimuli; projects with display manufacturers to empirically evaluate display performance; and the development of new algorithms for stereoscopic rendering in computer science. I will conclude by showing stereoscopic images and movies from recent collaborative visualization projects with cosmologists, astronomers, earth scientists and fine artists. CREW Annotated Recording of the Presentation

Biological Sciences Visualization Day 13th December 2007 CS1.10, Kilburn Building, University of Manchester

Research Computing Services in conjunction with the UK AVS+Uniras User Group (UAUUG) and vizNET, is holding a Biological Sciences Visualization Day on Thursday 13th December 2007. Talks and demonstrations will be given by presenters from The University of Manchester, Science and Technology Facilities Council - Rutherford Appleton Laboratory, Visual Technology Services Limited, International AVS Centre, AVS Inc., etc.

Further details and the agenda can be found at:

The meeting will be held in CS1.10 in the Kilburn Building (building 39 on the campus guide) and will start at 10am and finish at 4pm. Attendance is free and lunch will be provided. To confirm attendance, please email:

Dave Shreiner ARM Ltd. "A Survey of Mobile Graphics Technologies" 7th December 2007, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Hardware acceleration for graphics in mobile devices has features and capabilities that rival current desktop systems. This talk will discuss the current trends in graphics systems for embedded and mobile devices, focusing on architectural considerations for performance, power, and capabilities. Additionally, a brief discussion of available programming interfaces will also be included.

William Clocksin Oxford Brookes University "Rapid 3D scene modelling from video clips" 30th November 2007, 2-3pm Room 1.10, Kilburn Building The University of Manchester


I will describe the VideoTrace system developed in a collaboration between Oxford Brookes University and the University of Adelaide. VideoTrace interactively generates realistic 3D models of objects from video—models that might be inserted into a video game, a simulation environment, or another video sequence. The user interacts with VideoTrace by tracing the shape of the object to be modelled over one or more frames of the video. By interpreting the sketch drawn by the user in light of 3D information obtained from computer vision techniques, a small number of simple 2D interactions can be used to generate a realistic 3D model. Each of the sketching operations in VideoTrace provides an intuitive and powerful means of modelling shape from video, and executes quickly enough to be used interactively. Immediate feedback allows the user to model rapidly those parts of the scene which are of interest and to the level of detail required. The combination of automated and manual reconstruction allows VideoTrace to model parts of the scene not visible, and to succeed in cases where purely automated approaches would fail.

Jason Dykes City University London "Interesting? Aesthetic? Useful? GeoVisualization Perspectives and Examples" 16th November 2007, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Geovisualization involves the graphical depiction and exploration of structured spatio-temporal data sets. A whole series of techniques is being developed to promote this kind of activity. They are fuelled by a number of related trends, including: advances in computing; the increasing availability of formally derived spatial data; and the georeferencing of a whole host of diverse data sets, including user-contributed 'social' data. Situating these techniques in applied contexts and real workflows is less common, and developing geovisualization applications through a process that involves establishing needs and requirements is rare. A series of examples of current work at City University will be presented, including geographically weighted interactive graphics (geowigs), the 'road/map' application and analytical mashups that use Google Earth as the basis for exploration. They will be considered in terms of their utility and in light of efforts to embed geovisualization in real application scenarios.

Martin Naef Digital Design Studio, The Glasgow School of Art "Concurrent Projection and Acquisition" 19th October 2007, 2-3pm Room 1.10, Kilburn Building The University of Manchester


This talk presents a line of work that combines digital projection systems with user acquisition. The blue-c tele-presence system for the first time combined real-time 3D user acquisition with an immersive projection system to enable a full 3D real-time remote avatar. Despite the technical breakthrough, the original blue-c acquisition system was not widely deployed due to its significant complexity and resource requirements. Instead, the tele-presence idea was further developed by scaling it down into a more affordable system that could be retrofitted to existing spatially immersive displays. Inverting the original problem, the Living Canvas initiative finally aims to restrict projection onto a performer on stage. It brings the concepts and technology from the aforementioned tele-presence systems into a completely new and exciting application domain.

Hamish Carr UCD School of Computer Science and Informatics "(No) More Marching Cubes" Friday 12th October 2007, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Isosurfaces, one of the most fundamental volumetric visualization tools, are commonly rendered using the well-known Marching Cubes cases that approximate contours of trilinearly-interpolated scalar fields. While a complete set of cases has recently been published by Nielson, the formal proof that these cases are the only ones possible and that they are topologically correct is difficult to follow. We present a more straightforward proof of the correctness and completeness of these cases based on a variation of the Dividing Cubes algorithm. Since this proof is based on topological arguments and a divide-and-conquer approach, this also sets the stage for developing tessellation cases for higher-order interpolants and for the quadrilinear interpolant in four dimensions. We also demonstrate that, apart from degenerate cases, Nielson's cases are in fact subsets of two basic configurations of the trilinear interpolant.
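As background, the Marching Cubes cases discussed above are indexed by classifying the eight cell corners against the isovalue, and surface vertices are placed by interpolating along crossed edges. A minimal sketch, with made-up sample values:

```python
# Classify one cell of a scalar grid into its Marching Cubes case index
# (0..255) and interpolate the isosurface crossing on one edge.
def case_index(corner_values, iso):
    idx = 0
    for bit, v in enumerate(corner_values):   # 8 corners -> 8-bit index
        if v >= iso:
            idx |= 1 << bit
    return idx

def edge_crossing(p0, p1, v0, v1, iso):
    # Linear interpolation of the crossing point along one cube edge; the
    # trilinear field restricted to an edge is linear, so this is exact.
    t = (iso - v0) / (v1 - v0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

vals = [0.1, 0.9, 0.2, 0.3, 0.4, 0.8, 0.1, 0.2]
print(case_index(vals, 0.5))                  # corners 1 and 5 are inside
print(edge_crossing((0, 0, 0), (1, 0, 0), 0.1, 0.9, 0.5))
```

The case index selects a triangulation from a 256-entry lookup table; the talk's contribution concerns proving that table of cases complete and topologically correct.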

Visualization Day and IAC website relaunch 19th September 2007

In collaboration with the UK AVS and Uniras User Group (UAUUG), and AVS Inc., Manchester Visualization Centre hosted the IAC Website relaunch on 19th September 2007.

Talks included descriptions of the Parallel Edition and MultiPipe Editions of AVS running on various advanced machine configurations.

Further details of the day can be found at:

Malik Zawwar Hussain Geometric Modelling Group, University of Birmingham and Department of Mathematics, University of the Punjab "Positive Data Visualization using Spline Functions" Friday 31st August 2007, 2-3pm Room 1.10, Kilburn Building, The University of Manchester


Data visualization is an important topic in the field of Computer Graphics. The construction of curves and surfaces from discrete data using an interpolatory scheme is a common requirement. When data arises from scientific observations there is a further need to constrain the interpolatory scheme to ensure the resulting curves and surfaces remain valid. This talk is concerned with data which is always positive. The requirement is a mathematical function which is smooth, preserves the positivity of the data everywhere and is economical to compute. A piecewise rational cubic function, in its most general form, has been utilized for this objective. The method is implemented initially for a string of discrete data and then extended to an interpolating rational bicubic form for data arranged over a rectangular grid. Simple sufficient conditions are developed on the free parameters in the description of the rational function to ensure positive curves and surfaces.
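The rational cubic scheme itself is not reproduced here, but the positivity constraint can be illustrated with a simpler, well-known device: interpolate the logarithm of the data with a cubic Hermite and exponentiate, which keeps the curve positive everywhere by construction. The data values below are hypothetical.

```python
import math

# Interpolate strictly positive data by fitting a cubic Hermite to
# log(y) (slopes from finite differences); exponentiating guarantees
# the curve stays positive everywhere. This is a simpler guarantee
# than the rational cubic of the talk, which also controls shape
# through free parameters.
def positive_interpolator(xs, ys):
    ls = [math.log(y) for y in ys]
    d = []                                    # slopes in log space
    for i in range(len(xs)):
        if i == 0:
            d.append((ls[1] - ls[0]) / (xs[1] - xs[0]))
        elif i == len(xs) - 1:
            d.append((ls[-1] - ls[-2]) / (xs[-1] - xs[-2]))
        else:
            d.append((ls[i + 1] - ls[i - 1]) / (xs[i + 1] - xs[i - 1]))

    def f(x):
        i = max(j for j in range(len(xs) - 1) if xs[j] <= x)
        h = xs[i + 1] - xs[i]
        t = (x - xs[i]) / h
        h00 = (1 + 2 * t) * (1 - t) ** 2      # cubic Hermite basis
        h10 = t * (1 - t) ** 2
        h01 = t * t * (3 - 2 * t)
        h11 = t * t * (t - 1)
        val = (h00 * ls[i] + h10 * h * d[i]
               + h01 * ls[i + 1] + h11 * h * d[i + 1])
        return math.exp(val)

    return f

f = positive_interpolator([0, 1, 2, 3], [5.0, 0.01, 0.02, 4.0])
print(f(1.5), min(f(k / 100) for k in range(301)))
```

An ordinary cubic through data like this would dip below zero near the small values; the talk's rational cubic achieves the same guarantee directly in the original space.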

Ying Liang Ma Image Sciences King's College London "Delaunay based surface extraction algorithms" Friday 8th June 2007, 2-3pm Room 1.10, Kilburn Building The University of Manchester


A novel multi-resolution and iterative surface extraction algorithm is developed for the visualization of large and noisy image data. This iterative algorithm generates a series of surface meshes that capture different levels of detail of the underlying structure. At the highest level of detail, the resulting surface mesh generated by our approach uses only about 10% of the triangles in comparison to the marching cubes algorithm (MC), even in settings where almost no image noise is present. Our approach also eliminates the so-called 'staircase effect' which voxel-based algorithms like the MC are likely to show, particularly if non-uniformly sampled images are processed. Finally, we show how the presented algorithm can be parallelized by subdividing 3-D image space into rectilinear blocks of subimages. As the algorithm scales very well with an increasing number of processors in a multi-threaded setting, this approach is suited to processing large image data sets of several gigabytes. Although the presented work is still computationally more expensive than simple voxel-based algorithms, it produces fewer surface triangles while capturing the same level of detail, is more robust towards image noise and eliminates the above-mentioned 'staircase' effect in anisotropic settings. These properties make it particularly useful for biomedical applications, where these conditions are often encountered.

Diego Gutierrez "Air, water and light" Friday 18th May 2007, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Light interacting with the medium it traverses gives rise to a variety of interesting effects. Mirages, ghost ships, the blueish-greenish hue of water... all are caused by these interactions which actively affect how light behaves. We will show how some of these effects can be modeled and simulated, in two of the most common scenarios for us: the atmosphere and the ocean. Pretty pictures will replace boring equations as much as possible.

Robert S Laramee Department of Computer Science, Swansea University "The Search for Meaningful Flow" Friday 27th April 2007, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Swirl and tumble motion are two important, common fluid flow patterns from computational fluid dynamics (CFD) simulations typical of automotive engine simulation. We study and visualize swirl and tumble flow using several advanced flow visualization techniques: direct, geometric, texture-based, and feature-based. When illustrating these methods, we describe the relative strengths and weaknesses of each approach across multiple spatio-temporal domains typical of an engineer's analysis. The result is the most comprehensive, systematic search for swirl and tumble motion ever performed. Based on this investigation we offer perspectives on where and when these techniques are best applied in order to visualize the behavior of swirl and tumble motion.

Molecular Visualization Day 20th April 2007

In collaboration with the UK AVS and Uniras User Group (UAUUG), Manchester Visualization Centre is hosting a Molecular Visualization Day on 20th April 2007.

Amongst the speakers will be invited guests from the Swiss National Supercomputing Centre, Max Planck Institute for Chemical Physics of Solids and Department of Science and Technology, University of Linkoping.

Further details of the day can be found at:

Hans Bjelkhagen Centre for Modern Optics "Super-realistic-looking images based on colour holography" Friday 13th April 2007, 2-3pm Room 1.10, Kilburn Building The University of Manchester


There is an interest in high-fidelity image recording techniques with perfect colour rendition, which can also accurately capture the three-dimensional shape of an object. State-of-the-art colour holography provides full parallax 3D colour images with a large field of view. A "white" laser beam (combined RGB laser beams) is used to record the holograms. Each of the three primary laser wavelengths forms its individual interference pattern in the emulsion of the holographic plate. Three holographic images (a red, a green, and a blue image) are superimposed on one another. The virtual colour image recorded in a holographic plate represents the most realistic-looking image of an object that can be obtained today. The extensive field of view adds to the illusion of beholding a real object rather than an image of it. In connection with the presentation, colour holograms will be on display.

Caroline Larboulette Universidad Rey Juan Carlos, Madrid "Playing with Colors" Friday 30th March 2007, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Predictive rendering aims at reproducing the exact nature of light and simulated objects to create accurate colors and reflections for still images. One interesting sub-domain is that of fluorescent colors. We will show a way of reproducing this phenomenon, which arises at the atomic level, by using a macroscopic BRDF description. However, realistic does not mean aesthetic. We will also present some recent work on color harmony where Color Order Systems and harmony principles are used to create aesthetic Celtic designs.

Andrew Calway University of Bristol "Robust Real-Time Camera Localisation and Mapping" Friday 23rd February 2007, 2-3pm Room 1.10, Kilburn Building


Significant advances have recently been made in algorithms for real-time estimation of the pose of a moving camera using only visual measurements. There are typically two kinds of scenario. In the general case, no a priori information is available about the structure of the scene, and thus localisation must proceed in tandem with mapping depth values. This is the simultaneous localisation and mapping problem (visual SLAM). In other applications, prior knowledge of scene structure may be available, in the form of CAD or wireframe models, for instance, and these can be utilised to guide camera localisation. In both of these cases, an effective mechanism for robust operation is stochastic filtering, which provides a sound statistical framework for obtaining 'optimal' estimates of the 6-D camera pose and 3-D map. Several systems now exist which can operate in real-time (around 30 fps). In this talk I will describe recent work carried out in these areas at Bristol. We are particularly interested in designing localisation and SLAM algorithms which are robust to effects caused by 'normal' camera use, such as camera shake and visual occlusion. This is a challenging task and many existing algorithms fail in such cases. We have utilised generalised stochastic filtering in the form of particle filters and robust view-invariant feature matching to give algorithms which are able to withstand both severe shake and visual occlusion. This makes them particularly suitable for applications in wearable computing and augmented reality, in which camera movement is often agile and unpredictable. The talk will consist of an overview of stochastic approaches to localisation and SLAM, along with details and examples from our own work.
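The stochastic-filtering machinery behind such systems can be illustrated with a toy 1-D particle filter: a deliberately simplified analogue (a camera sliding along a rail with noisy position measurements), not visual SLAM itself, which applies the same predict / weight / resample cycle to the 6-D pose plus a map.

```python
import math, random

rng = random.Random(42)

N = 1000
true_pos = 0.0
particles = [rng.uniform(-10.0, 10.0) for _ in range(N)]   # prior belief

def resample(particles, weights):
    # Systematic resampling: particles survive in proportion to weight.
    total = sum(weights)
    step = total / len(particles)
    u = rng.uniform(0.0, step)
    out, c, i = [], weights[0], 0
    for _ in range(len(particles)):
        while c < u:
            i += 1
            c += weights[i]
        out.append(particles[i])
        u += step
    return out

for _ in range(30):
    true_pos += 0.1                                    # camera moves
    z = true_pos + rng.gauss(0.0, 0.2)                 # noisy measurement
    # predict: apply the motion model with process noise
    particles = [p + 0.1 + rng.gauss(0.0, 0.05) for p in particles]
    # weight: Gaussian measurement likelihood
    weights = [math.exp(-0.5 * ((p - z) / 0.2) ** 2) for p in particles]
    particles = resample(particles, weights)

estimate = sum(particles) / N
print(round(estimate, 2), true_pos)
```

The robustness properties discussed in the talk come from making the weighting step tolerant of outlier measurements (shake, occlusion) rather than from the basic cycle shown here.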

Marina Bloj Department of Optometry, University of Bradford "Colour accuracy in computer rendering and display: the opinion of a vision scientist" Friday 15th December 2006, 2-3pm Room 1.10, Kilburn Building The University of Manchester


At the Bradford Optometry Colour and Lighting (BOCAL) Lab we use computer-rendered and displayed images to study human colour perception, including colour memory and colour constancy as well as colour and shape interactions. In this talk I will present recent research from my lab that focuses on establishing colour accuracy at the rendering and display stages, and ultimately attempt to answer the question of whether it all really matters from the human-vision point of view. Areas covered will include: spectral rendering vs. three colour channels; what is the use of inter-reflections? Are 14-bit channels worth it? How do we display calibrated colour images on an HDR display? As a non-computer scientist I will welcome questions and insights from the audience and their complementary areas of expertise. Dr Marina Bloj, BOCAL, School of Life Sciences, University of Bradford

Erik Reinhard Department of Computer Science, University of Bristol "Image-based Material Editing" Friday, 20th October 2006, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Photo editing software allows digital images to be blurred, warped or re-colored at the touch of a button. However, it is not currently possible to change the material appearance of an object except by painstakingly painting over the appropriate pixels. Here we present a method for automatically replacing one material with another, completely different material, starting with only a single high dynamic range image as input. Our approach exploits the fact that human vision is surprisingly tolerant of certain (sometimes enormous) physical inaccuracies, while being sensitive to others. By adjusting our simulations to be careful about those aspects to which the human visual system is sensitive, we are for the first time able to demonstrate significant material changes on the basis of a single photograph as input.

Greg Ward BrightSide Technologies "High Dynamic Range Image Capture, Representation, and Display" Friday 22nd September 2006, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Conventional digital imaging is constrained to lie within the limited gamut and dynamic range of a standard CRT monitor, whereas the human eye can see roughly twice the sRGB color gamut and 100 times the dynamic range. LCD monitors are already claiming contrast ratios of over 1000:1, and true high dynamic range displays capable of 10,000:1 with 16 bits/channel are in the works. On the capture side, multiple exposures from a conventional digital camera can be used to create an HDR image, though issues such as image alignment and ghosting must be addressed. The speaker will describe and demonstrate techniques for hand-held HDR image capture using the Photosphere application he developed, and explain the various HDR image formats available along with their strengths and weaknesses. The audience is invited to ask questions on related topics, such as image-based lighting, tone-mapping, and what HDR means for the future.
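The multi-exposure capture idea in the abstract can be sketched as follows. This is a minimal illustration assuming a linear camera response (real pipelines such as Photosphere also recover the response curve and handle alignment and ghosting); the function name and weighting are our own:

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge several LDR exposures (pixel values in [0, 1]) into one
    HDR radiance map, assuming a linear camera response.

    Each exposure votes for radiance = pixel / exposure_time, with a
    'hat' weight that trusts mid-range pixels and distrusts values
    near 0 and 1, where sensor noise and saturation dominate."""
    images = [np.asarray(im, dtype=np.float64) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * im - 1.0)   # peaks at 0.5, zero at 0 and 1
        num += w * im / t
        den += w
    den[den == 0] = 1e-9                   # pixels clipped in every frame
    return num / den
```

For example, a pixel of true radiance 0.3 photographed at exposure times 1 and 2 reads 0.3 and 0.6; the weighted average of the two radiance estimates recovers 0.3 exactly.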

Ken Perlin New York University "An Overly Broad Talk About Recent Computer Graphics Research" Friday, 25th August 2006, 2-3pm Room 1.10, Kilburn Building The University of Manchester


Professor Perlin will talk about - and show - recent results in Animation, Human Figure Movement, Modelling, Rendering, Display Devices, Data-Capture Devices and Interaction Techniques and Interfaces. He will heroically endeavour to tie together all of these disparate topics into one grand unifying picture.

Details of venue and Access Grid joining are available at:

Associated Events:

Short Seminars

Mike Daw, Will the AG cut the mustard? (28th January 2005)

Darren Edmundson, Bringing VR to the public (28th February 2005)

Martin Turner, Mary McDerby, ESNW Passive Stereo, (3rd March 2005)

Martin Richardson, Spacebomb (11th March 2005)

Karen Grainger, Curating and Collection ... 3D (27th May 2005)

Bob Stone (28th April 2006)

Jonathan Blackledge (26th May 2006)

Aladdin Ayesh (16th June 2006)

Min Chen and David Chisnell (23rd June 2006)

Allan Evans (8th Sep 2006)

Alexei Sourin (22nd Sep 2006)

UAUUG - UK AVS+UNIVAS User Group Meetings


Multidimensional Visualization and its Applications: Parallel Coordinates Tutorial; Alfred Inselberg

Friday 1st September 2006, 10am-12noon and 2pm-4pm, Kilburn Building Room 1.10, The University of Manchester

Book published by Springer, September 2009:

Manchester Computing is pleased to host this tutorial, via support from vizNET, JISC and in conjunction with e-Science North West.

The desire to understand the underlying geometry of multivariate (multidimensional) problems has motivated several visualization methods that augment our limited 3-dimensional perception. After a short overview, Parallel Coordinates are introduced and rigorously developed, obtaining a one-to-one mapping between subsets of N-space and subsets of 2-space.

This leads to construction algorithms in N-space involving intersections, proximity, interior-point construction, and line and plane topologies useful in approximation and Computer Vision, as well as collision-avoidance algorithms for Air Traffic Control. It is a VISUAL multidimensional coordinate system.

Applications to Visual and Automatic Data Mining are illustrated with real multivariate datasets (some with hundreds of variables) together with a Decision Support system capable of doing Feasibility, Trade-Off and Sensitivity Analyses in complex multivariate processes.
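The point-to-polyline mapping and the point/line duality described above can be sketched in a few lines of Python. This is an illustrative sketch only (function names are our own), and the dual-point computation assumes the two segment slopes differ, so the crossing is finite:

```python
def to_polyline(point):
    """Map an N-dimensional point to its parallel-coordinates
    polyline: vertex i sits on vertical axis x = i at height point[i]."""
    return [(i, v) for i, v in enumerate(point)]

def dual_point(p, q, axis=0):
    """Find where the polylines of two points p and q cross between
    axes `axis` and `axis + 1`.  For any two points on the same line
    in N-space these crossings coincide, so a line maps to N-1 points,
    one per adjacent axis pair."""
    a1, a2 = p[axis], p[axis + 1]
    b1, b2 = q[axis], q[axis + 1]
    # Segment heights: a1 + t*(a2 - a1) and b1 + t*(b2 - b1);
    # solve for the parameter t where they agree, then x = axis + t.
    t = (b1 - a1) / ((a2 - a1) - (b2 - b1))
    return (axis + t, a1 + t * (a2 - a1))
```

For instance, the points (0, 1), (2, 2) and (4, 3) all lie on the 2-D line x2 = 0.5 x1 + 1, and every pair of them yields the same dual point (2.0, 2.0) in the parallel-coordinates plane.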

M4s.jpg This is a helicoid in 3-D and its image in ||-coords (two curves). The two points (one on each curve) represent one of the helicoid's tangent planes. A helicoid in N dimensions is represented by N-1 such curves, and a tangent plane by N-1 points (on the same horizontal line), one on each curve.

helix1s.jpg This is the same situation for a Moebius strip, with the generalization to N dimensions (hard to imagine) yet viewable with ||-coords without loss of information. These are state-of-the-art pictures and hopefully indicate what has been attained. Your comments are welcome.