Digital Dispatches: Prototyping an Interactive Time Line for Harvard GSD 075

The landmark exhibition at Harvard’s GSD culls from 75 years of teaching and practice, showing seminal artifacts in plexi-covered cases with vertical annotations done in a journalistic style. The challenge put to the design team at INVIVIA was to develop an interactive timeline, placed at the entrance to the exhibit, that would allow visitors to gain quick temporal context or an in-depth view of a specific period or set of artifacts. The interactive table had to be intuitive to use, show graphic and textual detail in high resolution, support multiple users, and be quite robust. Oh, and it had to be finished and installed in 2 months!

We looked at several options. Using a large touch-sensitive table was attractive in that it would allow many users and provide a natural interface that would make it easy to browse subjects in depth. But the cost and complexity of the large interactive surface seemed prohibitive. A simpler approach, which quickly became the favorite, used top projection onto a matte white surface (plywood covered with adhesive-backed matte vinyl) and multi-blob detection to notice where the users’ hands were. This approach doesn’t let you tell whether the user is touching the surface or hovering above it, so selecting/deselecting an object becomes difficult. So we came up with an interaction scheme using different horizontal zones that had different interaction meanings. The computer vision challenge is to notice only the user’s hands even though there is a continuously changing projected image on top of them. This gets solved by flooding the table with IR light and attenuating the white light reaching the webcam with thick neutral density filters.

IR image

The IR image with multi-Blob detection

White-light view

White-light view of table with projection

The multi-blob tracker was a repurposing of Python code used for many touch-table applications, based on Markus Gritsch’s VideoCapture Python module (http://videocapture.sourceforge.net/). The vision algorithm starts by capturing a base frame and then subtracting each subsequent frame from it to find what moved on the table. In this difference frame we define a region of interest and look for candidate blobs. These candidate blobs are flooded and culled to remove blobs too small to be anything but noise in the camera signal. Each real blob is measured, these measures are written to a file, and the process repeats for the next frame.
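For readers curious about the mechanics, here is a minimal sketch of that frame-differencing loop. It is not the actual INVIVIA/VideoCapture code (which grabs live webcam frames); it works on plain 2D lists so the steps stay visible, and the blob-size cutoff is illustrative:

```python
from collections import deque

MIN_BLOB_AREA = 20  # cull blobs smaller than this (camera noise); illustrative value

def find_blobs(base, frame, roi, thresh=30):
    """Difference `frame` against `base` inside `roi`, flood candidate
    pixels into blobs, cull the tiny ones, and measure the rest.
    base/frame are 2D lists of 0-255 values; roi is (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    h, w = y1 - y0, x1 - x0
    # pixels that changed enough vs. the base frame are blob candidates
    mask = [[abs(frame[y0 + y][x0 + x] - base[y0 + y][x0 + x]) > thresh
             for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for sy in range(h):
        for sx in range(w):
            if not mask[sy][sx] or seen[sy][sx]:
                continue
            stack, pixels = deque([(sy, sx)]), []
            seen[sy][sx] = True
            while stack:                       # 4-connected flood fill
                y, x = stack.pop()
                pixels.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and mask[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            if len(pixels) >= MIN_BLOB_AREA:   # keep only "real" blobs
                ys = [p[0] for p in pixels]
                xs = [p[1] for p in pixels]
                blobs.append({"area": len(pixels),
                              "centroid": (y0 + sum(ys) / len(ys),
                                           x0 + sum(xs) / len(xs))})
    return blobs
```

In the installed table, a loop like this runs per frame and the blob measures are written out for the interaction layer to read.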

We decided on full HD 1920 by 1080p DLP projectors because of the high visual quality of the DLP image, the fact that there are many competing projectors in this market segment, which keeps the price down, and the relatively high resolution.

We built a crude but effective prototyping area in INVIVIA’s recently renovated top-floor space by supporting a 16 ft 2×12 board between the top of the AC control room and a 10 ft ladder mounted on tables. This allowed us to quickly build projector housings out of MDF with a bandsaw and drywall screws, attach them to the board with more drywall screws, and eventually fit sliding supports for fine position adjustments.

These projectors must be operated nearly horizontally or they will overheat, so each projector housing had a 45 deg. mirror attached to angle the image toward the table below.

Initially we hoped to use 3 projectors arranged horizontally, giving us a 5760 by 1080 total image size, but realized after initial testing that stretching the 1080-pixel image across the 32 in. dimension of the table would not support the small type we hoped to use to annotate the images. We wound up buying an additional projector and turning each projector 90 deg. so that the 1920 pixels stretched across the 32 in. table, which shrunk the table length to 4320 pixels for a ratio of 2.25:1.
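The arithmetic behind the switch is worth spelling out:

```python
# Pixel density across the 32 in. table width, in both orientations
table_width_in = 32
landscape_ppi = 1080 / table_width_in   # 33.75 ppi: too coarse for small type
portrait_ppi = 1920 / table_width_in    # 60 ppi with the projectors turned 90 deg.

# Four rotated projectors stacked along the table length
length_px = 4 * 1080                    # 4320 px
aspect = length_px / 1920               # 2.25 : 1
```

Rotating the projectors nearly doubles the pixel density across the table width, at the cost of one more projector to cover the length.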

Installation view

Installation view of projector/computer tray and table

Projector sled

Projector sled with mirror

The final aspect of the projection setup that needed to be worked out was the IR emitters.  We chose rectangular array emitters normally used outdoors to illuminate parking areas for nighttime surveillance.

The final installation used a well-reinforced plywood projector/computer tray hung below a false ceiling. Openings were cut for the projector light, the IR emitters, and a webcam. Each projector was mounted on a slide that held the 45 deg. mirror and allowed easy adjustment of the critical spacing between projectors, giving almost the sense of one continuous image.


CNC Plotter: A platform for DIY Bio/rapid-prototyping/Sculpture_Image experiments

Recently I have finalized and begun prototyping a long-considered design for a device that will allow me to do experiments in several areas, the first being:

  • organic imaging using DIY Bio technologies
  • DIY rapid prototyping, experimenting with application approaches, binders and curing techniques
  • sculpture/image experiments using a router and an ink-jet head to both form and image onto 3D surfaces.

In addition, one can imagine acquiring a high-powered IR laser and beginning to cut plastic and wood for 2.5D rapid prototyping, or attaching a plasma cutter and cutting through sheets of metal with speed and high precision to make architectural elements… etc.

[Image: AutoCAD basic design for the CNC plotter with axes annotated]

Over the past several months I’ve been assembling the pieces needed to, finally, start building the prototype.  Here we see the beast with the major structural elements in place and with the X axis functioning, though not driven by a lead screw yet.

[Image: CNC plotter basic structure]

“Hello INVIVIA” Update (8/17/09)

Having installed the ball screws for the X and Y axes and the little lead screw for the Z axis, as a test that the three axes were working I made a Hello World example, or in this case Hello INVIVIA. To do this I had to make a little Sharpie pen holder out of four pieces of acrylic and mount it with the simplest connection to the Z axis slide. If we really want to use the device as a pen plotter it will take a bit more work to devise a way to change pens under computer control, and to build an interpreter/controller so that it understands HP Graphics Language, which is what pen plotters speak.
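As a sketch of what that interpreter might look like, here is a toy parser for a few HP-GL commands (IN, SP, PU, PD, PA). Real HP-GL has dozens more commands, and pen changes would need the SP command wired to actual hardware:

```python
def parse_hpgl(program):
    """Interpret a tiny subset of HP-GL (IN, SP, PU, PD, PA) as a list
    of (pen_down, x, y) pen movements.  Coordinates are plotter units."""
    moves, pen_down = [], False
    for cmd in program.upper().replace("\n", "").split(";"):
        cmd = cmd.strip()
        if len(cmd) < 2:
            continue
        mnemonic, args = cmd[:2], cmd[2:]
        coords = [int(v) for v in args.split(",")] if args else []
        if mnemonic in ("IN", "SP"):     # initialize / select pen: ignored here
            continue
        if mnemonic == "PU":             # pen up (may also carry a move)
            pen_down = False
        elif mnemonic == "PD":           # pen down
            pen_down = True
        elif mnemonic != "PA":
            continue                     # unsupported command in this toy
        # PU/PD/PA may all carry x,y pairs to move through
        for i in range(0, len(coords) - 1, 2):
            moves.append((pen_down, coords[i], coords[i + 1]))
    return moves
```

A controller would walk the returned moves and issue step commands to the three axes.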



Here’s a little video of the ‘Hello INVIVIA’ performance…

In order to really begin “using” the plotter I will need to cover the ball screws to keep dust and other gunk out, or they will stop working quickly. In addition, as you probably noticed, there is no ‘work surface’, just a cardboard box with a piece of plywood on top as the drawing surface. I did this so that the working parts would be exposed for the video, but one of the next tasks is to cover the bottom ball screw with a slotted, layered sheet of MDF (medium-density fiberboard), once that ball screw has its protective sheath. The next part I will build will be an adapter to hold a small router on the Z axis so that I can begin to carve stuff…

Motor Testing Prints

Each axis uses a stepping motor driven by a very simple, cheap kit controller card, powered by a scrounged-together power supply. The X and Y axis steppers go through a 4:1 gear (pulley) box to help them drive the considerable weight of the axis elements, while the Z axis works well direct drive. Steppers are virtual magnetic springs with very high holding torque (in this case 450 oz-in) but get progressively weaker the faster they go. So the motor top speed needs to be fine-tuned so that you get the fastest action possible without stalling out. The following test prints show the motors stalling out at different points in the print process. They resemble an exercise you might give a beginning typography class.
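The usual way to get the fastest action without stalling is to ramp the step rate up and down rather than jumping straight to top speed. A sketch of such a trapezoidal ramp (this is not what the kit controller or Mach3 actually does internally, just the idea):

```python
def step_delays(total_steps, start_rate, max_rate, accel):
    """Per-step delays (seconds) for a trapezoidal speed ramp: accelerate
    from start_rate to max_rate (steps/s), cruise, then decelerate.
    Keeping max_rate below the stall speed is the 'fine tuning' above."""
    ramp, rate = [], start_rate
    while rate < max_rate:
        ramp.append(1.0 / rate)
        # v^2 = v0^2 + 2*a*s, applied one step at a time
        rate = min(max_rate, (rate * rate + 2.0 * accel) ** 0.5)
    if 2 * len(ramp) > total_steps:        # short move: triangle profile
        ramp = ramp[: total_steps // 2]
    cruise = total_steps - 2 * len(ramp)
    return ramp + [1.0 / max_rate] * cruise + ramp[::-1]
```

Each delay would be slept between step pulses; stalling shows up when max_rate is set above what the motor’s torque can sustain.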

[Images: nine ‘Hello INVIVIA’ test prints, offset and staggered where the motors stalled]

———————————————————————————————————————–

10/13/2009 update

Phase2 (part1): The Drawing Machine

Getting the plotter to work at all was Phase1. Now that I’ve got a better idea of the range of images one can make when you can move drawing implements accurately in 3-space, we’ve entered Phase2 of the CNC Plotter project.

[Image: drawing test sketch]

This crude sketch for the print I hope to execute sets the context for the experiments. It combines both raster and vector elements in a way that should demonstrate the flexibility of both the machine and the approach, and be a compelling print in its own right. To date I have only realized the raster portion of the print, as you will see demonstrated below.

I began by experimenting with as many different drawing implements as I could get my hands on, starting with ball point pens, pencils, markers, crayons, drafting pens…many of them left over from the days (35 years ago) when I was a printmaker.

If you want a nice fine line a drafting pen or ball point pen works very well, but after a while that tiny line is not as expressive as needed and gets pretty boring…

Big thick graphite pencils have the most continuous tonal range but have one fatal flaw that I couldn’t figure out how to get around: they have a certain amount of memory. When you push very hard to make a rich black mark, they get roughed up and remember it, making the subsequent marks in the line darker than they need to be.

Conte crayons, and their cousins the charcoal crayons, seemed to be the best substitute, and they have a certain random quality to the mark that actually makes them superior in many ways. You have to hold the crayon in a holder that can be mounted on the Z axis of the plotter. I do this by drilling a hole in the tip, hot-melt-gluing a small piece of the square Conte crayon in, then machining it down to a diameter just slightly wider than the raster that I’ll be using for that image.

[Images: machining the crayon tip; inserting the crayon tip; the crayon tip holder; installing the crayon holder]

Conte and charcoal crayons have their good qualities but limitations as well. Conte is very soft and makes a nice black mark, but it wears down so quickly that if you make a tip long enough to last through a whole print it will break off. Charcoal is much harder and will last much longer, but doesn’t make really dark marks, as shown in the left image below. To get a full-toned print, I decided to use a two-pass or duotone printing sequence. I start with an image of just the shadows (middle image) and finish by printing with the harder charcoal over it. Together they produce a rich, full-toned image.
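In software terms the two-pass sequence is just a split of the grayscale image. A sketch (the shadow cutoff value here is mine, chosen for illustration, not the value I actually used):

```python
def duotone_passes(image, shadow_cutoff=96):
    """Split a grayscale image (0 = black, 255 = white, rows of ints) into
    the two passes described above: a shadows-only image for the soft
    Conte pass, then the full image for the harder charcoal pass."""
    # pass 1: keep only the dark (shadow) pixels; everything else is blank
    shadows = [[px if px <= shadow_cutoff else 255 for px in row]
               for row in image]
    # pass 2: the harder charcoal draws the whole tonal range over it
    charcoal = [row[:] for row in image]
    return shadows, charcoal
```

Each returned image would then be rastered into toolpaths and printed in sequence.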

[Images: too-light charcoal-only print; shadows-only pass; two-pass result]

I made a crude little video that shows the two step process.  Enjoy!

[Video: The Drawing Machine]

Part2 (next steps)

In order to finish the print I need to build some way of:

  • twisting, scraping at an angle, larger pieces of charcoal
  • holding a brush, probably at an angle, dipping it in paint
  • holding an airbrush,
  • etc…

This means I need to add another motor and controller to the system and hope that my laptop and Mach3 CAM software will let me control it.   We’ll see…

———————————————————————————————————————–

12/10/09 Update

Phase3: The Carving Machine



Several years ago I had a chance to borrow an amazing device: the Polhemus Scorpion laser scanner, which captures the 3D position of a scanning laser beam and orients the beam relative to the scanner using a magnetic tracking device built into the scanner. Allen was a patient subject and I captured this scan of him. Notice that hair such as eyebrows, mustache and beard, and shadows don’t fill in, leaving holes in the cloud of points that will have to be dealt with later. This cloud can be converted into a mesh model using Polhemus’ FastSCAN software, as you see here. Since I wanted to cut this surface using the plotter I needed to convert the mesh into a set of machine instructions. For this I used the very excellent MeshCAM application, which analyzed the model and produced a set of toolpaths for the several different cutters needed to shape the insulating foam I used. For the cutter I re-purposed a UNIMAT-SL mini lathe/mill that my father gave me long ago, machining a mount to hold it firmly on the Z axis and another mount to move the motor away from its original position parallel to the cutting head. I chose this mini mill instead of the standard router because it has a keyed chuck and collet that allow me to use a wide range of cutters. Routers tend to limit you to router bits of a certain type.

The final model turned out to be 8″ x 13.5″ x 5.6″, so I decided to use 1″ insulating foam because the foam cell size is quite small and uniform, the material is cheap, and it cuts nicely. The downside to the choice is that I had to glue six 8″ x 13.5″ pieces together, and my choice of superglue, then (when that failed) Titebond, caused me lots of grief. I’m hoping that the foam glue I tested will work out for the next carving. Notice that I needed two laptops for this project: one to control the plotter and do the cutting, the other to capture sound and a frame every second from a webcam to record the process.

The cutting sequence starts with roughing, where much of the unwanted material is removed. From the look of the roughing result composed by MeshCAM, one imagines the roughing process was invented by a frustrated cubist. The next steps begin to reveal the form of the object to be uncovered (discovered?) within the raw material. Each pass uses a finer cutter, cutting a little deeper and revealing more of the detail of the original model. Problems with control of each axis were revealed at different points in the 7-step process. Solving these meant discovering faulty electrical connections, bad bearing alignments, etc. Tuning the Z axis so that it doesn’t lose steps either going up or down has been a challenge. A video of the complete process can be found here.

Next Steps

Before changing direction again I feel I should refine the carving process so that I can carve a form with the minimum of technical interruptions from the plotter.  I would welcome input from the INVIVIA community, but my guess is that next should be a rapid-prototyper attachment for the Z axis that uses ABS or other heat formable plastic.  That feels like the right additive complement to the subtractive carving approach shown here…

———————————————————————————————————————-

2/28/10 Update

Phase4: The Extrusion Machine

Fused Deposition Modeling (FDM), the process used in my new extrusion head, was invented in the late 1980s. A CNC platform deposits successive layers of material that either cool or catalyze into a hard, often usable, object. The design I used in this extruder is a variation on the extruder design laid out at RepRap.org – a wonderful group of folks who have come up with a very simple and easy-to-build design for a machine intended to replicate itself.

An extruder is not much more than a device to push a thin rod of thermoplastic into a heated chamber at a controlled rate. The chamber is a brass rod with a hole a little bigger than the plastic rod and a NiCr wire wrapped around it to keep the chamber at 230–260 °C, or about as hot as most cooking ovens. In order to control the temperature, a sensor such as a thermistor or thermocouple is attached to the end of the rod below the heated wire and read by a closed-loop PWM heater circuit. At left, the heated chamber is below, covered by heat-resistant orange Kapton tape.
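A closed-loop PWM heater can be sketched in a few lines. This is a simple proportional version, not the actual kit circuit, and the setpoint/band numbers are illustrative:

```python
def heater_duty(temp_c, setpoint_c=245.0, band_c=15.0):
    """Proportional PWM duty cycle (0..1) for the NiCr winding, from the
    thermistor reading: full power when more than band_c below the
    setpoint, off at or above it.  Values here are illustrative."""
    error = setpoint_c - temp_c
    return max(0.0, min(1.0, error / band_c))

def control_tick(read_thermistor, set_pwm):
    """One pass of the closed loop: read the sensor, update the PWM."""
    duty = heater_duty(read_thermistor())
    set_pwm(duty)
    return duty
```

Run on a timer, `control_tick` holds the chamber near the setpoint without the overshoot a plain on/off switch would give.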

A top view of the extruder shows my slight variation on the RepRap approach. Given that the extruded filament is about 1/6th the size of the plastic rod, you need to push the rod about 1/6th as fast as the plotter travels, which turns out to be too slow (on my big, slow plotter) to use a direct drive from a 200 step/rev stepper. My answer was to drive a hand-made worm gear directly from a piece of 1/4-20 threaded rod attached to the stepper. The plastic rod is held firmly against this worm gear with a spring-loaded roller bearing.

I quickly discovered that the standard FDM/rapid-prototyping approach of extruding a very small filament and filling areas with a quick zig-zag pattern was not going to work for me, since my machine is so much slower than the much smaller typical rapid-prototyper. I found that by extruding a much larger filament I could create single-walled structures that stood on their own and did quite nicely, thank you… but this meant I couldn’t use the readily available FDM software unless I rewrote it (which I might do later, much later…). So instead I came up with simple g-code programs, just a few lines each, that move hexagon layers around and let me test the many variables that need to be controlled to make a successful extruded part. At the left is one of the first three straight-line programs that helped me work out extrusion rate and the impact of plotter backlash. The twisted tube on the right shows what happens when the feed rate on the extruder changes over time, producing some layers that are skimpy, some that are fat or bumpy. I think I’ve fixed this skipping extruder feed problem.
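A hexagon-layer program like the ones described can be generated in a few lines. This sketch emits standard G1 moves, though the feed rate and sizes here are made up, not my actual test values:

```python
import math

def hexagon_layers(radius, layer_height, layers, feed=120):
    """Emit g-code that traces the same hexagon perimeter at successive
    Z heights -- the single-walled test shapes described above.
    Units are millimetres; feed is mm/min."""
    lines = ["G21 ; millimetres", "G90 ; absolute coordinates"]
    for n in range(layers):
        z = (n + 1) * layer_height
        lines.append(f"G1 Z{z:.3f} F{feed}")       # step up one layer
        for i in range(7):                         # 6 sides, back to start
            a = math.radians(60 * i)
            lines.append(f"G1 X{radius * math.cos(a):.3f} "
                         f"Y{radius * math.sin(a):.3f} F{feed}")
    return "\n".join(lines)
```

Twisting or scaling the hexagon per layer is just a matter of offsetting the angle or radius inside the loop.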

Once I began to get control over extrusion rate I decided to start playing around with layer thickness. The shape on the left uses a layer of .030″, the one on the right .010″; both use the same data otherwise. The piece on the left followed the all-black twisted hexagon on the table behind it. I began the piece by switching from black ABS plastic welding rod to white, and you can see that the chamber slowly mixed the two colors and finally expelled all the black, although by then the object was almost halfway finished.

These masks trace the saga of controlling extrusion rate and extrusion temperature. The mask on the left was made as the first variation of the worm gear drive was failing. The extrusion rate was set high and, when it was working, produced the bulging, disease-like layers. I remade the worm gear feed, and the middle mask shows that the extrusion rate was indeed much more controlled, though I began to experience lack of adhesion between layers and the mask split as it cooled. In the rightmost mask I used a model with much deeper holes and larger overhangs, which caused unsupported layers to droop, so I used an air-cooling attachment to keep the drooping down, and that further decreased the adhesion between layers.

After more experimentation with materials, extrusion speeds, temperature, and backing up and advancing the extruder in an attempt to break the filament and create a hole, I reworked the model to have really deep holes, used a much higher extrusion rate and temperature, and succeeded in keeping the layers together and creating a mask with eye and eyebrow openings after cutting off the extraneous filaments.

I created a little video of a few of these pieces being built here.

———————————————————————————————————————–

5/2/2010 Update:

3D Scanning/Extrusion


The big issue in the earlier Extrusion update was what to print.  I had a small set of older scans made with the Polhemus Scorpion laser scanner that I had on loan for an earlier project and no real way to make new scans so I had to invent forms in g-code to print as a way of testing the extrusion head.  In this update I describe the use of the very excellent David Laser Scanner software and the ways I have found to create files that can be printed on my Extruder.

The Scanning Process

As you see in the left image, the David scanning system (http://www.david-laserscanner.com) in its simplest form uses a printed background pattern mounted on boards angled at 90 deg. The software controls a webcam whose optics are calibrated using just the background pattern. During the object scan the software looks for a thin, straight laser line slowly swept across the object at just the right angle. The laser line is produced by a long-range bar code scanner set to continuous scan. I built a tripod mount of acrylic with a Velcro keeper and mounted the tripod head on a vertical slide that allows me to adjust the height quickly. Vertical scanning is accomplished by putting a weight on the handle and adjusting the tension on the tilt head for a slow, easy traverse. Too rapid a scan and you miss areas of the object.

The David software provides a couple of views: one shows what the camera sees, another shows the current state of the 3D model with depth coded using a color map. David is very picky about the quality of the lines cast onto the background and will warn you if something is amiss, making the camera view very useful for diagnosing laser line problems. The 3D model view tells you if the scan is going well and you are gathering data continuously. If not, typically the best thing to do is erase and start again after adjusting the laser/tripod arrangement. Trying to fill in missing parts of the scan often seems to cause ridges to develop in the model.


Flush Touch Screen Working Prototype, Version 8

Flush Touch Screen working prototype

So the obvious question is: what’s different from version 7? The answer is, not much, in that it uses the design articulated in version 7. But this is not a test; it is a real working prototype, with all the machining done to specifications. It has a finished rectangular LED guide, inset into a plywood version of the MDF table material that will be used in the real INVIVIA conference table version. And, I might add, done on a DIY CNC milling machine by a guy with a walker… (’cuz the idiot broke his hip skiing!)

[Images: Flush Touch Screen LED guide; back of the LED guide inset into the plywood; done by a guy with a walker]

There is one important difference from the version 7 test: here I have only had to use one line of LEDs, running along the top of the LED guide. That this would be enough was a pleasant surprise, and it bodes well for using it in an environment where there is more ambient IR light, which would force me to use LEDs on all sides to compete.

Here’s a video of the demo shown above and a video from the camera’s view.

[The blob tracker is still jittery and sometimes loses the blob position, even though the video image is solid. I worked for about an hour and found the place in the Python code where this is happening, but couldn’t find a way to keep Python from forgetting what it should do. This will take some work to fix, but is very fixable.]
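One likely fix, for the record, is to smooth the reported position and coast through dropped frames rather than trusting each raw detection. A sketch of that idea (this is not the existing tracker code, and the smoothing constants are illustrative):

```python
class SmoothedBlob:
    """Exponential smoothing for a jittery blob position, holding the
    last good estimate for a few frames when the tracker drops the blob."""
    def __init__(self, alpha=0.4, max_missed=5):
        self.alpha, self.max_missed = alpha, max_missed
        self.pos, self.missed = None, 0

    def update(self, detection):
        """Feed one frame's (x, y) detection, or None if the blob was lost.
        Returns the smoothed position, or None once the blob is gone."""
        if detection is None:                 # tracker lost the blob
            self.missed += 1
            if self.missed > self.max_missed:
                self.pos = None               # give up: report no blob
            return self.pos
        self.missed = 0
        if self.pos is None:
            self.pos = detection              # first sighting: take as-is
        else:
            x, y = self.pos
            dx, dy = detection
            # move a fraction alpha toward the new detection
            self.pos = (x + self.alpha * (dx - x),
                        y + self.alpha * (dy - y))
        return self.pos
```

A larger alpha tracks fast hand motion better; a smaller one damps the jitter more.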

[The melamine screen edge in this prototype version was cut into squares and taped to the plywood guide holder as a way of conserving the one sheet of melamine I plan to use for the real table. In the real table there will be a perfect hole that the screen sits flush into.]

Next Steps:

Lensless Projector, no-mirrors approach:

The big question, as I see it, is what kind of projector to use, which determines how the table will look. I think there are two options. The simplest and initially most expensive one, and the one I believe will produce the best-looking table, uses the NEC WT610e lensless projector that was demo’d to us at INVIVIA.

This option is different from all the alternatives to follow in that there is no additional large mirror to mount under the table. The folding of the optical path is all done in the one projector. The big downside, besides the cost of the projector, is that the projector needs to be oriented vertically, which will decrease the bulb life (but maybe not by much). This is an unknown we will need to deal with if we choose this option. (And sorry for the crude representation of the projector, but I think the massing and position are correct.)

Conventional Projector, one or more mirrors approach:

The other approach uses a conventional projector, at 1/3 the cost, but since the throw distance is so much longer it must use two mirrors mounted under the table. This version, really just a kludgy repurposing of an earlier design, folds the optics so that we get the shortest table, but that puts the mirrors down near the floor… maybe not the best design for the conference room table. The biggest downside to this design is that it would be very difficult to sit on either side of the table.

Here’s another two mirror conventional projector mirror arrangement. It is much less deep but not really as simple as it can be. We would choose this version if people had to sit on all sides of the table, as it gives a reasonable amount of leg room all around.

This final version turns the projector upside down and mounts it to the underside of the table at the very end, where there is just enough room to project an image large enough to fill the one medium-sized mirror, right below the screen. The mirror is held in place under the table by two clear polycarb supports that connect cross members with mirror position adjusting screws. The only downside I can see is that the projector is at the end of the table, making that side impossible to sit at. Otherwise this is the second simplest to build, with the NEC WT projector version being the simplest.

Here it is seen from slightly below. The mirror rests against the 3 angle-adjusting screws.

Currently, I am working out the best way to cut a perfect (+/-.003″) rectangular hole into the top melamine sheet.

Further Next Steps Thoughts:

6/7/08->

Having become more informed about MS Surface, INVIVIA would be crazy not to differentiate what we are doing as much as possible from the current Surface direction. To this end I have found another projector type: an ultra-short-throw, conventionally lensed, 2000 lumen DLP projector that allows us to fill nearly the whole conference table surface with a bright projected image using one mirror. The projector is the Toshiba TDP-EW25U Conference Room Projector.

Here we see the table from the side showing the light fan (imagine the room filled with smoke) which shows how the projected image approaches and bounces off the mirror. This arrangement has a projector throw distance of just 26 inches, projecting an image 26 by 44 inches (50 inch diagonal). The projector has a native resolution of 1280 by 800 pixels.

The projector and mirror need to be protected from the sides and this is done with long pieces of frosted plexi attached to legs.

This size and shape would allow 4-6 people to work comfortably around the table, each with enough real estate not to feel squeezed. Given the 2000 lumens and the fact that the projector uses DLP, the image should be bright and sharp, with nice contrast. We need to see whether 1280 by 800 is enough resolution for that size.
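A quick density check suggests what we’d be working with (assigning the 1280 pixels to the 44 in. side, which matches the aspect ratios):

```python
# Is 1280 x 800 enough for a 26 x 44 in. image?  A back-of-envelope check.
ppi_long = 1280 / 44    # ~29.1 ppi along the table length
ppi_short = 800 / 26    # ~30.8 ppi across the width
```

Roughly 30 ppi in both directions: fine for imagery and large UI elements, marginal for small text.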

Dual Use Version:

7/3/’08

Following up on a suggestion by Daniel Spann that it might be really nice to use the touch table in a (near) vertical orientation, I reworked the optical path, added a mirror and projector cowl and added a hinge so that the table can now be used in either the horizontal or (near) vertical orientations.

Lifting the back of the table swings it to the near vertical orientation.

Of course there are many specific details to be worked out, but I think this concept sketch gives an idea of one way to make a very flexible prototype for exploring more uses of the touch table approach.

The projection surface shown here is ~ 22 by 35in (41in diagonal), set in a 28 by 41in table top. This is the smallest image the projector is made to deliver without modification. I think it may be possible to demount the lens and add a short lens extender tube to make projecting a smaller image possible.

Glowing Breath

While doing tests to understand the last subtleties (I hope) of the shape of the touch screen LED guide, I noticed that when I held my hand close to the screen, but not quite touching it, the moisture from my fingers condensed on the screen leaving a glowing area even after I moved my hand away. Then I breathed onto the screen….

[Image: glowing breath condensed on the touch screen]

(video)

It’s not a stunning revelation, but I thought it was a neat, ephemeral phenomenon worth sharing… something that might spark an idea for an artwork or interaction piece.

And while you’re at it.. have a great New Year!

cheers, ronmac

Flush Touch Screen, Version 7

At our last meeting we determined that big plastic touch-screen frame bezels were out, and that inserting the touch screen directly into the melamine-MDF table surface was the way to go. In further comparison testing of the different screen material options, I found that the thinner the touch screen material, the better the FTIR effect. I now have a design that shows the glowing finger with minimum pressure across the entire surface, as well as eliminating all “bezelness”. I also had the good fortune to find a supplier of the same “Cherry Veneer” melamine surface used on the conference tables.

[Image: front view, pointing at a dot]

[Image: table view]

The challenge was to get the maximum amount of light into the edge of the touch screen. The approach I took with version 5, with the 1/4″ touch screen, was to put the LED on axis with the center of the screen. But in the case of this .107″ screen, there simply is not room to do this. The LED must be at an angle to the axis of the screen, but the angle must be as shallow as possible to keep the light inside the screen rather than spilling out onto the projection surface.
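The physics behind the shallow angle is total internal reflection. A quick calculation of the critical angle (assuming a typical acrylic refractive index of about 1.49, which is my assumption, not a measured value):

```python
import math

# Light inside the acrylic sheet that strikes the surface at more than
# the critical angle (measured from the surface normal) is totally
# internally reflected and stays trapped in the sheet -- which is why
# shallow injection angles work and steep ones spill light out.
n_acrylic = 1.49                                 # assumed index for acrylic
critical_deg = math.degrees(math.asin(1.0 / n_acrylic))   # ~42 deg.
```

Rays injected close to the plane of the sheet stay well beyond that angle from the normal, so they bounce between the faces until a fingertip frustrates the reflection.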

[Image: LED guide, AutoCAD side view]

In this version we expose the LED as much as possible and cover the channel with reflecting mylar tape to direct the light into the screen edge.

[Image: LED guide, user’s side]

Here’s an edge view from below the screen:

[Image: LED edge view from below the screen]

And finally, a little video of what the IR camera sees. The small dots are where the finger is lightly touching the screen; bigger dots show more pressure.

[Image: IR camera view]

Z-Depth Touch Table Proposal

Recent INVIVIA brainstorms and discussions have pointed to the desirability of adding Z-Depth to the Touch Table idea. The least obtrusive way to do this would be to illuminate the hand from below the screen and use stereo cameras situated there as well. I’ve made numerous tests of this approach using different screen materials and illuminator positions.

What I’ve learned is that the ideal screen material would be a great diffuser of the white projector light and yet be transparent to the IR light illuminating the hand and provide a clear view for the IR camera. I have not found such a material. What I find instead is that a terrific screen material is almost opaque to the IR source and IR camera, and a material that functions well for the IR camera is a terrible projector screen.

To illustrate the problem I used frosted mylar, the material that is the closest I could find to fulfilling both criteria: allowing the cameras to see through somewhat and still provide an adequate projector screen.

So here’s what the camera saw, with the hand touching, looking at the underside of the frosted mylar illuminated by a couple of rows of IR LEDs (MS Surface works something like this, I assume)

[Image: camera view of the illuminated hand touching the mylar]

When the hand is slowly pulled away from the mylar (here in 1 inch increments) the diffusion of the image (very good for the projection) blurs the hand so that at 4 inches away it is no longer recognizable, not by human eyes and certainly not by a video camera.

[Image: camera view sequence as the hand is pulled away]

 

If all we needed was 2 inches of hand travel we could probably make a frosted mylar screen work.

Looking back at the Z-Depth stereo demo from last year is instructive. It shows that you need at least 8-10 inches of depth of movement in order not to severely constrain the user. (The cameras have the blue lights and are located above the screen.)

[Images: Z-Depth demo, close and far examples]

 

Here’s what the stereo cameras see:

[Image: Hand Follower stereo views, near and far]

 

The Proposal:

 

Until we find the magical screen material that diffuses white light and is transparent in IR, I propose that we adapt the approach I used successfully in the Hand Follower demo above to the Touch Table environment.

My idea is that we build (or at least think hard about building) a Z-Depth module that is simply a small wide-angle USB video camera, a stacked and angled fan of IR LEDs, and a housing to protect them (and, in this case, to hold the connector that gets the signals into the table).

zdepthtopflatfantogether.jpg

Here are other housing variations:

If we can’t find a suitable small USB camera, this housing is big enough to cover the existing Logitech version.

zdepthsideflatfantogether.jpg

And, of course, there is the obligatory Egg variation:

zdeptheggfantogether.jpg

…and this final (not very well crafted) variation, where the parts are partially hidden in the 1-inch table depth and the clunky plastic parts are traded away for lots more veneer…

zdepthtableinsetledfanscameratogether.jpg

…obviously, ideas for other variations (or changes to these variations) are welcome…

 

Solving the stray window light problem:

When you look again at the vertical Z-Depth demo pictures, you realize that they were made in a pretty controlled environment where stray light was easily managed. The INVIVIA conference room is another matter entirely.

My living room has windows big enough to offer a similar experimental environment for testing. The problem that has to be dealt with is that at least one of the cameras on the table corner will likely be looking across the table at a well-lit window. Blacking out the window entirely is not an option. What is needed is a way to attenuate the IR light coming through the window without drastically changing the amount or color of the white light.

What I found was a family of heat-absorbing commercial window films, intended to keep heat out during the summer and in during the winter, that do the job pretty well.

windowfilmtestsmall.jpg

 

Here you see the video camera (blue light, to the right of the screen) looking at my hand and the window; to the left of the screen is the IR illuminator. In the video capture window on the screen, the only things visible are my hand and the IR source, even though the bright window in the background is in view…

Apparently there are non-reflective versions of this film that would reduce the crinkly surface effect. That version was not available at Home Depot…


Flush Touch Screen, version 5


The version 3 Touch Screen’s geometry was largely determined by the thickness of the UV-coated polycarbonate sheet I had found at Home Depot, which seemed to do the best job of aiding the FTIR effect while providing smooth finger glide across the screen. Since the sheet was .095″ thick and the LEDs are .2″ in diameter, the LED guide had to have a substantial bevel in order to align the LEDs with the screen.

After a short web search I found thicker UV-coated plastic, and a couple of quick prototypes using the existing simple LED guides confirmed that the thicker sheet did in fact exhibit satisfactory FTIR qualities. It seemed right, then, to design a flush touch table LED guide that would let us inset a touch screen into an existing table, mount the projector and camera underneath, and begin in-house touch table experiments.
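One reassuring check is that the FTIR effect itself doesn’t depend on sheet thickness: total internal reflection keeps the IR bouncing inside the polycarbonate at any thickness, provided the LED light enters shallower than the critical angle. A quick worked calculation, assuming a typical polycarbonate refractive index of about 1.58 (the sheet’s actual index wasn’t measured):

```python
# Snell's law check on the FTIR geometry: light inside the sheet striking
# the surface at more than the critical angle is totally internally
# reflected until a fingertip frustrates it. n = 1.58 is a typical
# polycarbonate index (assumption, not a measured value).
import math

n_polycarbonate = 1.58
n_air = 1.0
theta_c = math.degrees(math.asin(n_air / n_polycarbonate))
print(f"critical angle: {theta_c:.1f} degrees")  # ~39.3
```

So thickness matters mainly for coupling the .2″ LEDs into the edge cleanly, which is exactly what the flush guide addresses.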

Here is the AutoCAD side view of the LED guide:

flushtouchtableautocadsideview2shrunk.jpg

Since this is just a (quick and dirty) proof-of-concept prototype, I only machined enough of an LED guide to hold 18 LEDs, or about 5.1″ worth of LEDs (twice, once for each side)…

flushtouchtableprotoshrunk.jpg

A closer view of the flush edge LED guide:

flushtouchtableframecloseupshrunk.jpg

The side view, as it would be seen if the screen extended to the edge of the table…

flushtouchtabletestimage.jpg

All this assuming that we might end up with a table that looks something like this:

flushtouchtableblackedgesideshrunk.jpg

Anyway, I will bring in both the beveled and flush prototypes to evaluate…

Inside the LED Guide

Just to make what’s happening with the LED guide a little clearer, here’s the underside of the guide with the LED PCB in the background. Note that the LEDs have to turn the corner in this version.

flushtouchtableledguideundershrunk.jpg

Here the LED PCB has been put in place (with some difficulty, because there’s not much room in there).

flushtouchtableledguideunderpcbshrunk.jpg

Finally, a shot from the side that the touch screen and video screen see. I discovered after this version 4 prototype was made that the holes need to be deeper in order to mitigate the strong side-illumination lobes from the LEDs. (Sadly, I didn’t get a picture of the version 5 deep holes until the whole thing was taped together.)
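How much deeper is a simple geometry question: the hole wall should intercept any ray leaving the LED at more than some chosen half-angle. A rough sketch, where both the .1″ hole radius (half the .2″ LED diameter) and the 30-degree side-lobe cutoff are assumptions for illustration:

```python
# Rough hole-depth geometry for the version 5 guide: a ray leaving the
# LED tip at the cutoff half-angle should just graze the hole rim, so
# anything wider hits the wall instead of the screen edge.
import math

hole_radius_in = 0.1          # half of the .2" LED diameter
cutoff_half_angle_deg = 30.0  # assumed side-lobe cutoff angle

min_depth_in = hole_radius_in / math.tan(math.radians(cutoff_half_angle_deg))
print(f"minimum hole depth: {min_depth_in:.2f} in")  # ~0.17
```

A tighter cutoff angle (narrower useful beam) drives the holes deeper still, which is the tradeoff the deeper version 5 holes make.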

flushtouchtableledguideholesshrunk.jpg