Matt Parker helped me debug my code today. I originally had a switch talking to the Arduino, and the Arduino would take the picture. Since the switch was being checked in the loop() function (the main Arduino function, which loops infinitely), holding the switch sent a HIGH value to the relay for maybe a quarter of a second, whereas triggering from code alone sent it for something like 1/1000th of a second (not actual times, just to illustrate).
We both thought that something was wrong in the code and it would never fire the camera. Matt used Serial.print() inside of the conditional, so we could at least see if the conditions were being satisfied. They were. We immediately realized that the pulse just was not long enough. Whew. Thanks Matt. Thanks to Tym Twillman for help debugging the rotary impulses too; apparently there is noise in the first few milliseconds while the voltage rises to a HIGH value. By putting in a delay before counting transitions, I am able to avoid the intermittent connections that would otherwise produce an error in counting.
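Here's a minimal sketch of both fixes together, with hypothetical pins and timings (dial contact on pin 2, relay on pin 8, a 5 ms settle delay, and a generous quarter-second trigger pulse; say we fire the camera when a 3 is dialed):

// hypothetical pins & timings; adjust for the actual circuit
const int dialPin = 2;        // rotary dial pulse contact
const int relayPin = 8;       // relay that fires the camera
const int settleDelay = 5;    // ms to wait out the noisy rising edge
const int triggerHold = 250;  // ms to hold the relay closed so the camera actually fires

int pulseCount = 0;
int lastState = LOW;

void setup() {
  pinMode(dialPin, INPUT);
  pinMode(relayPin, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  int state = digitalRead(dialPin);
  if (state == HIGH && lastState == LOW) {
    delay(settleDelay);                  // Tym's fix: skip the noise while the voltage rises
    if (digitalRead(dialPin) == HIGH) {  // still HIGH? it was a real transition
      pulseCount++;
      Serial.println(pulseCount);        // Matt's trick: print inside the conditional
    }
  }
  lastState = state;

  // a dialed 3 produces 3 pulses (a real dial also needs an inter-digit timeout; simplified here)
  if (pulseCount == 3 && state == LOW) {
    digitalWrite(relayPin, HIGH);
    delay(triggerHold);                  // the other fix: a long pulse, not a 1 ms blip
    digitalWrite(relayPin, LOW);
    pulseCount = 0;
  }
}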
Here’s a video of the camera triggered by rotary dial.
I think the mic is broken on the Canon G7 I've been using; the volume is super low. Anywho, after frying 500mA relays on the external flash, I got the remote flash trigger working through a 5 volt, 1 amp relay.
I was working on soldering relays to old point-and-shoots. I ruined the first camera, a 5-megapixel HP, when I accidentally shorted the flash capacitor to another circuit (while trying to discharge the capacitor so I wouldn't get shocked anymore). The second camera, which Tedb0t was nice enough to donate, wasn't proving to be much more fruitful. I soldered wires to the 4 points of the shutter contacts on the circuit board, but when I tried to trigger it with Arduino power, it wouldn't work. I de-soldered the wires, assuming that my solder job was lacking, and in doing so I cleaned off one of the solder points as well. Andy Miller told me I can find another area to solder to (using the continuity tester).
My nerves started up after spending a great deal of time with no progress. I knew there was an Instructable on using a graphing calculator to control a Canon Rebel XT for time-lapse. Steve Litt looked at me: “wait… if you know how to do it… get something working first, then play around later.”
So that's what I've been doing. Here are videos (despite the crap audio) of a relay controlling the shutter on the Rebel and an external flash. When I get the 2.5 to 2.5mm TRS adapter from RadioShack tomorrow, I will have independent control of the auto-focus feature and the shutter.
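For reference, here's roughly how I expect the two-relay control to work once the adapter arrives. The pins and delays are guesses; on the Rebel's 2.5mm remote jack, shorting the ring to the sleeve focuses and shorting the tip fires, as far as I can tell:

// hypothetical wiring: relay on pin 6 shorts ring (focus) to sleeve,
// relay on pin 7 shorts tip (shutter) to sleeve
const int focusPin = 6;
const int shutterPin = 7;

void setup() {
  pinMode(focusPin, OUTPUT);
  pinMode(shutterPin, OUTPUT);
}

void takePicture() {
  digitalWrite(focusPin, HIGH);    // half-press: let the camera focus
  delay(800);                      // give autofocus time to lock (a guess)
  digitalWrite(shutterPin, HIGH);  // full press: fire the shutter
  delay(250);                      // hold long enough to register (see the relay lesson above)
  digitalWrite(shutterPin, LOW);
  digitalWrite(focusPin, LOW);
}

void loop() {
  takePicture();
  delay(10000);  // e.g. one frame every 10 seconds for time-lapse
}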
I am also contemplating two flashes. One regular flash and one with a user-modified color setting.
Marianne showed our class how to navigate through After Effects last week. Here's my first animation using the program. Thanks to Steve Litt for the inspirational Brainticket albums. I used their “Era of Technology” for this piece.
Spent all day Saturday teaching myself Final Cut. At least there was some great footage to have fun with. Once Mustafa got back from his other assignment, it was easier to figure out the sequences. We had to get crafty because we really didn't have as many takes as I wanted, so sometimes reversing clips or splicing sound files helped with linearity.
THE JERKS AT BLIP DELETED MY ACCOUNT. WILL HAVE TO FIND AND REUPLOAD.
As I mentioned earlier, my PComp final will be addressing new uses for payphones. The goal is to make an antiquated service useful again, since most people use their cell phone or VoIP on a wifi connection. How can we implement a new use for phonebooths that wouldn't interfere with their existing functionality, but expand it?
As entertaining as the urine deterrent may be, I'm afraid it wouldn't be of much consequence. It might even perpetuate the booth's use as a urinal, adding celebrity status to a drunken piss. So I had the idea to make a broadband photobooth. This would put all of the hardware to better use, making a novelty out of the mundane.
I spoke to many ITPers, faculty and students, and although it's straightforward, it'll take some time to perfect since there are so many steps to the process. I'm sure as I move along, the process will be refined.
The user will deposit coins (it is a PAYphone) and dial the number stenciled/stickered on the booth. I'll have an Asterisk account set up to verify the payphone's number. Asterisk will then send a HIGH value to the circuit that is soldered to a cannibalized digital camera. The camera, equipped with a wifi-enabled SD card (Eye-Fi), will take ~4 pictures and upload them to a Flickr account I'll set up.
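Here's a sketch of the camera half of that chain, assuming the Asterisk-driven circuit simply pulls an input pin HIGH once the number is verified (the pins and timings are placeholders):

const int asteriskPin = 2;  // goes HIGH when Asterisk verifies the payphone's number
const int shutterPin = 8;   // relay soldered to the camera's shutter contacts

void setup() {
  pinMode(asteriskPin, INPUT);
  pinMode(shutterPin, OUTPUT);
}

void loop() {
  if (digitalRead(asteriskPin) == HIGH) {
    for (int shot = 0; shot < 4; shot++) {  // ~4 pictures per call
      digitalWrite(shutterPin, HIGH);
      delay(250);                           // hold long enough to trigger
      digitalWrite(shutterPin, LOW);
      delay(2000);                          // give the camera time to write to the Eye-Fi card
    }
    while (digitalRead(asteriskPin) == HIGH) {
      // wait for the line to drop before re-arming
    }
  }
}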
I will make a cardboard prototype, but I do have one of the phonebooths in front of our building in mind, since we have flatscreens in the windows. Ideally, one of these screens will be streaming the Flickr slideshow. As far as content editing goes, maybe there'll be a voting system that filters what makes it to the screen.
I am also interested in writing a patch to take pictures every 10 seconds or so while someone uses the phone. It would be interesting to try to determine what type of call was made from body language or emotional response. A time-lapse would also make a nice exposé. Haha, I can see it now: “Lonely Technology.”
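That patch could be as simple as this sketch, assuming a hypothetical off-hook switch on one pin and the shutter relay on another:

const int offHookPin = 3;              // hypothetical switch: HIGH while the handset is lifted
const int shutterPin = 8;              // shutter relay
const unsigned long interval = 10000;  // one picture every 10 seconds
unsigned long lastShot = 0;

void setup() {
  pinMode(offHookPin, INPUT);
  pinMode(shutterPin, OUTPUT);
}

void loop() {
  if (digitalRead(offHookPin) == HIGH && millis() - lastShot >= interval) {
    digitalWrite(shutterPin, HIGH);
    delay(250);
    digitalWrite(shutterPin, LOW);
    lastShot = millis();
  }
}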
Juri & I teamed up for the audio assignment in Comm Lab. We were inspired by the mashup videos we watched in class, so we wanted to learn how to make one from footage we enjoyed.
Never having used Final Cut, I gravitated to Ableton Live, since Winslow had created a pretty cool breakbeat set against his friend talking.
We ran into many problems with codecs, compatibility, and finally aspect ratio. The final video is still stretched, since I could not figure out how to change the source file linked to the edits in Live.
Here is the stretched, low-res version, until I get a response from Ableton Support.
Having abandoned my initial project, I was pretty worried about falling behind, so I met with Kate to express my worries and work around my creative block. The problem with the ideas I'd come up with was that they were either too simple/mundane/not interactive enough, or too ambitious and unlikely to be finished by the end of the semester. She evaluated the weaknesses of each of these ideas to help me consider the ingredients of a good one. Another piece of advice I took away was to think of something that is already a part of my life, something I am already acquainted with, a problem I'd like to solve.
This reminded me of my fellow classmate Michelle Mayer, who's designing a light installation to resolve a junkie hangout in her neighborhood. It's a great project because it benefits the community, not just herself.
In retrospect, I remember making a hyperbole along the lines of “phonebooths are used like public toilets, if at all” during a conversation about kiosk culture in Pete Menderson's Materials class. That being said, how could I solve, or react to, this misuse of a public utility? The first idea that came to mind was to embarrass the micturator, if you will, by projecting his face on the wall behind him. He would face the social repercussions he obviously sought to avoid when he used the phonebooth to conceal his face and genitalia. This could also tip off any police officers who would not notice a shadowy figure, as opposed to a brilliant, large-scale projection.
The pee switch: two pieces of sheet metal on the ground form an open (low-voltage) circuit. If someone urinates onto the metal, the resultant puddle/stream closes the circuit. An IC detects this and turns on the webcam/digital camera concealed inside the phonebooth, which transmits to an LCD in the window behind him (an alternative to a projector, which would take time to start up).
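An Arduino could stand in for the IC. Here's a sketch of the detection half, with one plate grounded and the other pulled up to 5V through a resistor into an analog pin (the threshold is a guess that would need calibrating against actual field conditions):

const int sensePin = 0;     // analog input across the plates
const int cameraPin = 8;    // transistor/relay that powers the camera and LCD feed
const int threshold = 900;  // open circuit reads near 1023; liquid pulls it down (calibrate!)

void setup() {
  pinMode(cameraPin, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(sensePin);
  Serial.println(reading);          // watch the values to pick a real threshold
  if (reading < threshold) {
    digitalWrite(cameraPin, HIGH);  // caught in the act: camera on, face on the LCD
    delay(30000);                   // keep the feed up for 30 seconds
    digitalWrite(cameraPin, LOW);
  }
}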
I decided to ditch the magnet-fluid. Chris Cerrito and a couple of other second-years explained their pitfalls with ferrofluid last year. He said that most of the videos online were deceptively scaled, and that many of the effects measured just millimeters. Still, I'm sure blasting an electromagnet with high voltage would make the fluid stand at a decent height.
After much research, I realize that ferrofluid has been done time and time again, and fabrication would take up most, if not all, of the time allotted; besides, the assignment is for interactivity, not for simple controls.
I had a similar idea: a square container of water into which I could spout black ink that is magnetically responsive. When ink flows through water, it makes a beautiful smoke-like effect. Ideally, the user would take a stylus with a strongly magnetized tip and affect the ink paths with it, essentially drawing in 3D. It's the combination of uncontrollable variables that is appealing. The artist typically wants to control as many factors as he or she can, but in this instance, the entropic nature will lend a natural aesthetic to the user's unnatural movements.
A year or two ago, I watched a video stream in which someone puts salt in a Ziploc bag and shakes the contents. The salt started spiraling and bonding with itself; I want to say it was even floating. I tried iodized salt, kosher salt, and sea salt, with air and without, but to no avail. I sifted through email, YouTube and all the avenues by which I might find its link. Three hours later, no floating salt, but a few videos caught my attention: “non-Newtonian” fluid (it's friggin' cornstarch + water) and ferrofluid.
I want to create some sort of intuitive interaction between the user and something very graceful, with an organic appearance. I want to stimulate curiosity about the natural world, which many forget when living in an environment that is almost entirely fabricated.
Here are two examples by Sachiko Kodama and Martin Frey of great manipulation of Ferrofluid.
I imagine something similar to Sachiko's piece, where I build a nice 3D shape embedded with electromagnets. Multiple ultrasound sensors, or a webcam, would map hand movements, which would control the voltage sent to the electromagnets, thereby creating intuitive results.
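The control loop might look like this, assuming a single ultrasound sensor with an analog distance output driving one electromagnet through a transistor (the pins and mapping are placeholders):

const int sensorPin = 0;  // ultrasound sensor with an analog distance output
const int magnetPin = 9;  // PWM pin driving the electromagnet through a transistor

void setup() {
  pinMode(magnetPin, OUTPUT);
}

void loop() {
  int distance = analogRead(sensorPin);           // 0-1023; say a nearer hand means a stronger field
  int strength = map(distance, 0, 1023, 255, 0);
  analogWrite(magnetPin, strength);               // PWM approximates a variable voltage
  delay(20);
}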
Obviously, it would not look as nice as Sachiko's piece, but it would be interactive, whereas her piece uses a simple pot to ramp the magnetism up and down. I also know the oil stains; she uses Teflon-coated metal, probably so it's easy to clean, since it still stains.
Cameron Kunduff's and my stop motion. Unfortunately, iStop Motion crashed during the shoot. Despite all of the cmd+s keystrokes, half the footage was lost. We shot a few scenes after the loss, but instead of going through all of the frustration of reshooting the same footage, we chose to have fun with the foley.
I'm still a bit bummed about the pitfalls. Cam & I have a very similar sense of humor, so the hardest thing was staying focused. There were some remarkable symbiotic digressions, so another collabo might be in order.
Transistors. Seriously frustrating. I understand them, and yet they didn't want to work. We wanted to use transistors to open & close our electromagnet circuits, but when we sent a HIGH value from our digital pinout, no current flowed into our electromagnets. We then tried plugging in a simple LED where the lightbulb or EM would've been, with a resistor of course, but to no avail.
We switched breadboards, transistors (3 or 4!), all the leads, power supplies, and power adapters. Kate suggested testing the digital pinout by running it through an LED to ground, so that's first on our list when we get back to it.
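Kate's test is about the simplest sketch there is; with the pin wired through an LED and resistor to ground, the LED should blink (pin 13 assumed here):

const int testPin = 13;  // the digital pin under suspicion, wired through an LED + resistor to ground

void setup() {
  pinMode(testPin, OUTPUT);
}

void loop() {
  digitalWrite(testPin, HIGH);  // LED on means the pin really is sourcing current
  delay(500);
  digitalWrite(testPin, LOW);
  delay(500);
}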
Transistors are the last piece of our midterm puzzle, so expect an update soon. I’ll upload some photos when we get it workin’. Hmm maybe I’ll H-Bridge it in the meantime.
I'm still running on fumes, but I know there's no getting up tomorrow morning. I don't have PowerPoint installed on this new(er) laptop, but the presentation will be up in the next day or so.
I spoke to Dan Shiffman today about further customizing his brightness mirror patch. We wanted a mousePressed & keyPressed function to freeze-frame. Ordinarily we would use a pause() command, but since the Capture class doesn't have said f(x), Dan suggested we save the frame as a PImage that we can use to fake a pause.
Once we can get the transistors working, we can finally test to see how our code runs. This has been an odd way to code, not having a working apparatus to test algorithms against.
It's 8:08 AM… yes people, 808. And I did consume 8+8 caffeinated beverages over the course of the last 22 hours I've been P-Comping. I'm not great on documentation after I crash out, but here's some Arduino code.
int incomingByte = 0;

// servo variables
int servoPin = 2;     // control pin for servo motor
int minPulse = 500;   // minimum servo position
int maxPulse = 2500;  // maximum servo position
int pulse = 0;        // amount to pulse the servo
int increment = 50;   // there are 40 gear teeth of planar movement;
                      // the servo has 2000 explicit values, so 1 tooth is our
                      // greatest resolution of movement; ergo, pulse 50 = 1 tooth.
                      // So if we move 5 times/sec, which is 250 pulse/sec,
                      // the total render time is 8 seconds.
int movePerSec = 5;   // how quickly we are rendering the model
int delayTime = 10000 / movePerSec; // determines the proper delay

// pixels
int incomingBytes[9] = {255, 255, 255, 255, 255, 255, 255, 255, 255}; // set all pins to white
int currentPinVal[9];
int pixelPin = 9;     // set pixel pin

long lastPulse = 0;   // the time in milliseconds of the last pulse
int refreshTime = 20; // the time needed in between pulses
int analogValue = 0;  // the value returned from the analog sensor
int analogPin = 0;    // the analog pin that the sensor's on

void setup() {
  pinMode(servoPin, OUTPUT); // set servo pin as an output pin
  pinMode(pixelPin, OUTPUT);
  pulse = minPulse;          // set the motor position value to the minimum
  Serial.begin(9600);
}
void loop() {
  if (Serial.available() > 0) {
    incomingBytes[8] = Serial.read(); // just read the 9th pixel
    // get values & map to usable servo data
    for (int i = 0; i < 9; i++) {
      //incomingBytes[i] = Serial.read(); // get pixel values from Processing
      currentPinVal[i] = map(incomingBytes[i], 0, 255, 500, 2500); // change brightness value to depth
      Serial.print("I received: ");
      Serial.println(incomingBytes[i]);
    }
    // place servo at first position
    digitalWrite(servoPin, HIGH); // turn the motor on
    delayMicroseconds(pulse);     // length of the pulse sets the motor position
    digitalWrite(servoPin, LOW);  // turn the motor off
    lastPulse = millis();         // save the time of the last pulse
  }

  //delay(100);
  //pulse = map(incomingByte, 0, 255, minPulse, maxPulse); // change brightness value to servo position

  // pulse the servo again if the refresh time (20 ms) has passed
  // TO ADD: conditional servo reset to 500 & resetting magnets back on
  // also, check if all magnets are off
  if (millis() - lastPulse >= refreshTime) {
    if (pulse >= currentPinVal[0]) { // if we have passed the pin's value
      digitalWrite(pixelPin, LOW);   // turn off magnet through transistor
    }
    else {
      digitalWrite(servoPin, HIGH); // turn the motor on
      delayMicroseconds(pulse);     // length of the pulse sets the motor position
      digitalWrite(servoPin, LOW);  // turn the motor off
      pulse += increment;           // step one tooth closer to the target
      lastPulse = millis();         // save the time of the last pulse
    }
  }
}
We've come a long way this week, but let me explain where we were a few days ago.
The idea was to create a way for a person to comfort another in a way that audio and video don't achieve. Ideally, it would take a 3-dimensional impression of your hand (or whatever you put on it, the cat?), process the values, send them to your chat buddy's computer, and render them on their 3D surface. This means a soldier can understand the size of his newborn's foot, and people can literally hold hands across the world.
When Juri, Diana and I met Tuesday, we were still discussing an approach to making a prototype for a system that is implausibly expensive to create. For the user's interaction to yield real-time results, we would have to control the many nodes of the output device discretely, and doing so means buying a dedicated actuator for each node.
Naturally, we chose the frugal path, designing a mechanism to move the actuator across a grid of nodes, stopping at each node, altering the position (z axis) of said node, and moving on to update the grid.
The problem with this approach is that there is a disconnect between the user’s real-time input and results that would take a long time to render (essentially, the finer the result, the longer the wait). What is the point in making an interactive system, when the user is constantly waiting for the output to catch up to his movements from minutes ago?
Ok, so let’s use a small number of output nodes. But how do we make an attractive result with few nodes?
So, let's call a few input nodes a finger. And instead of reading pins, we'll read the light reflected from a finger through a webcam. The computer interface will render video in response to the user's actions in front of the webcam. When the user likes a pattern, he or she may click the mouse to take a snapshot that will print to the output device.
The way we decided to actuate these pins is by touching a matching grid of electromagnets to the backs of the pins, to which we glued some sheet metal. The entire grid of magnets is moved toward the pin board via servo operation (this is the Lego device you see in the video). Each time the servo rotates, the pin positions are checked, and if the servo & pin positions correlate, power to that magnet is shut off by opening its circuit.
Check out this great sketch Juri made.
For brevity, here are some features that we've tackled so far:
I really responded to Juri’s idea of a surface that would let a user interact with someone who is far away. As some of you know, my girlfriend is in Holland, and as great as video chatting may be, it lacks physicality – sometimes you just need an embrace, or more simply, a hand to hold.
The 3D pin model I had as a kid is still impressive visually, and I'd love to figure out how to read in a 3D shape, process it, and update one of these pin sculptures to the specifications of the original shape.
After asking around at ITP, it seems I found something everyone was very interested in, but also wary of at the same time.
I propose to first figure out a system that would render a matrix of cylinders by scanning across and updating each cylinder (this is a lower-resolution version of the pin model). This means I have to fabricate a machine that can pivot across two dimensions and then modify the third.
I'm confident that the 3D representation of the data in Processing would translate well to the resultant grid, but the actuation of the grid is still the problem. Since this is a real-time activity, refresh rate is an important factor in making a plausible device, but the quicker it is, the more costly it becomes.
Matt Parker, a 2nd-year ITPer, had a similar vision for his final project last year, so he had much to say about pitfalls & servos, as well as insight on the threaded z-axis shaft I was pondering how to implement.
There were also Instructables on the graph pivoter that I was describing to my group members…