Thesis

With some help from different colleagues from work who were willing to be part of my documentation, I was able to document some of the interactions:

 

I also continued to get some really great feedback. The thing I found in this setting was that everyone just wanted to play with it – they were all very curious to try different things and see what would happen. Everyone thought it would be great if it were bigger – taking up a whole wall. Or even if it could make different shapes or waves.

One of the Scenic Designers, Ethan Brown, stayed around for almost an hour trying to find different ways to get different colors out of it. Just moving and trying different things. At his suggestion, we put a white board directly behind it to see if we could get some better color mixing to happen:

Ethan in Front of the Wall WITHOUT the White Board Behind
Ethan in Front of the Wall WITH the White Board Behind

It’s not an exact replica – but you can see the difference. Both have their pluses and minuses – and it’s definitely something to look at for the future.

Everyone responded really well to being able to see the colors they were wearing reflected in the piece. Initially, I wanted it to be only a light amber color. But using the Live Camera feed in MadMapper added more of a pixelated reflective nature to the piece that I enjoyed in testing on my own. I would like to alter/obscure the mirroring a bit more. I did find that this drew people in a bit more – they were fascinated by what was happening on the other side of the wall. And you could see when they developed a connection with it.

At one point, two of my friends – Nina Alexander and Jerllin Cheng – hugged in front of the wall. It was one of the sweeter moments of interaction documentation that I really loved. Seeing more than one person interact with it was really wonderful. Throughout the process, people kept asking me what would happen when more than one person was in front of it – and I never really had an answer, or a goal in mind. And I think that’s okay – it was wonderful to watch interactions between multiple people evolve. For the most part, it caused more of the wall to light up – but in a lovely way. The way I arranged the pixels to be expressed, they mixed with each other – so you almost couldn’t tell which part was which person. It all mixed to become one. That, I really loved seeing. Seeing how the whole wall would light up when people connected. And you could see that the users enjoyed it as well.

Barbara Cokorinos – the Graduate Design Administrator – came up to interact with the wall. It was wonderful seeing her light up with it – and play with it.

All in all – everyone enjoyed interacting with the piece. I think when I set out to do this, I kept trying to make the interaction and reasoning too serious. Though there are deeper reasons as to why I chose to make a wall, the ultimate goal was to bring about better connection. And joy is obviously a wonderful way to do that. I’m also happy because it seemed like everyone took something a little different from it. It’s been hard for me to pin down my thesis and the real reasoning behind this piece for many reasons. But ultimately, I wanted to create something that people could imprint their own meaning onto. And I was happy to see that happening the more people interacted with it.

Here are some more of my favorite photos:

Nina Inches in and Examines the Pieces
Eugenia Approaches the Wall
Nina and Jerllin Hug
Just a Wall
Just Some Lights
Ethan Examines
Up Close and Personal
Joy
Checking Out the Other Side
Silhouette
Lina Wonders
Super Up Close
Behind the Scenes
Aaron Parsekian For the Win
Aaaand I’m Out For Now

Create a “Public Space” on the Internet

Photo from: http://patosalazar.com/journal/internet-and-the-public-space-part-1/

 

 

This assignment has eluded me a bit. First off, because I am still learning how to code and what exactly the internet is. Being tech savvy has never really been my forte. That being said, I’m learning, and I’m getting better.

One thing that has come up for me with coding is the 404 Error Page – not just because I get them a lot when trying to figure out coding bugs, but also because I’ve stumbled upon some pretty interesting ones – ones that are even helpful, or cause a good chuckle. This blog even helps you set up a 404 error page to Rick Roll your office mates (http://blog.jasonsamuels.net/post/81389582573/how-to-rick-roll-the-entire-office). That, I can get behind.

Along those lines, I thought it would be interesting to set up a “Public Space” 404 Error page, or series of pages, based off of Brian Eno and Peter Schmidt’s Oblique Strategies. And that some of the pages could even have links that would lead you to explore different things – inspire you, sidetrack you in a wonderful way. I put together what I think these might look like – or a few different options for them.

Some of the strategies really speak for themselves as causing a thought to spring from them. Or, simply keying into the fact that you found a 404 Error in a way that’s humorous to me, at least.

 

This one included a link to a Google search for “Accretion” – an idea I’m not sold on, but something I thought was interesting.

 

The version above has a link to an article about intonation…something I’m not sure whether I like having or not.

 

 

 

 

 

 

The first version of this is blank…the second has a link to Google Search because I found it funny, honestly.

 

One has a link to John Cage’s 4’33 on YouTube, and the other links to an NPR article on John Cage’s 4’33.

 

 

 

 

 

 

Another one that could be the 404 Error itself, or have deeper life meaning to it.

 

This one has a link to Brian Eno’s essay Axis Thinking, something I found appropriate to link to this strategy.

 

I’m not exactly sure what technically would go into this – though I think the blog I linked to for the Rick Roll is a good place to start. And I will be honest: a ‘public space’ online is a tricky concept for me to master. But I’m going to keep exploring this, and see if I can develop it more.
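For the technical side, here is a minimal sketch of what generating a random-strategy 404 page might look like – this is just my guess at a starting point, with a placeholder sample of strategies rather than the full Oblique Strategies deck:

```javascript
// Sketch: build a random Oblique-Strategies 404 page.
// The strategies array is a small placeholder sample, not the full deck.
const strategies = [
  "Honour thy error as a hidden intention",
  "Use an old idea",
  "Repetition is a form of change",
];

// Pick one strategy at random from the list.
function pickStrategy(list) {
  return list[Math.floor(Math.random() * list.length)];
}

// Wrap the chosen strategy in a minimal 404 page.
function build404Html(strategy) {
  return `<!DOCTYPE html>
<html>
  <head><title>404 – Not Found</title></head>
  <body>
    <h1>404</h1>
    <p>${strategy}</p>
  </body>
</html>`;
}

const page = build404Html(pickStrategy(strategies));
```

A server (or a host’s custom-error-page setting, like the one described in the Rick Roll post) would then serve this HTML with a 404 status.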

Thought Experiments for Socially Engaged Art

For class, we had to come up with 3 different thought experiments – they could be tangible or intangible. Here are my 3:

  1. The conversation around The Spear being recorded, or live streamed, in an adjacent room.
    • In South Africa, I spent a few weeks researching the visual arts and how it’s being used for social transformation. Once there, it was impossible not to hear about a painting by Brett Murray called The Spear. The amount of controversy surrounding this painting is vast – everyone has an opinion about it. The painting is of the current president of South Africa, Jacob Zuma, in a very stylized manner with his genitals exposed. The painting has been vandalized, it was banned for a long time, and Murray was involved in a defamation suit brought on by the African National Congress. Now, the painting stays displayed at the Goodman Gallery in South Africa, behind a curtain. While the gallery wants to keep freedom of expression open and doesn’t want artists to censor themselves, it understands that the painting is not appropriate for everyone. It’s interesting how the act of hiding the painting almost makes it more dangerous. And the discourse surrounding this painting, to me, seems more interesting than the actual painting. The amount of unrest this painting caused – the attention it brought to so many layered issues – seems incredibly important. I would love to record the different conversations had around this painting, conversations that continue to happen 7 years after it was first shown. I think those are worth recording and listening to separately, in a blank room, without any visuals.

 

 

  2. Project key human rights quotes onto obelisks around the world. The quotes start at midnight at 100% fullness; then slowly over the next 24 hours, fade incrementally. I would like the quotes to be written so that they are vertical – looking almost as though they are raining – the letters on top of each other as you read them.
    • This idea is influenced by a few different art installations and ideas. Pablo Helguera referencing Chinese calligraphy watercolor in the first chapter of Education for Socially Engaged Art struck me – how it fades after you paint it, focusing on the process of the painting and watching it fade. Marry that with Faith47’s installation Feet Don’t Fail Me Now, on the large screens at the top of the Johannesburg CBD, which displayed the words for ‘home,’ ‘sanctuary,’ and ‘ache’ in various languages spoken in Africa. And finally, I think about the life-size ice sculpture of an African elephant that Shintaro Okamoto did in Union Square this year – as it melted, it represented the extinction of the African elephant. These ideas, and the discourse around them, are what initially inspired this idea of having words fade slowly over time. So often, the ideals of these human rights activists have been lost over time. Though initial change was significant, its lasting effects are largely superficial.
    • I like the idea of projecting on these monoliths around the world because they are beautiful in their own way, but also a symbol of power. And power has been repeatedly abused around the world. I hope this calls attention to that – acts as a reminder that we created different government institutions to serve the people, not to rule the people. Some of the quotes I would like to use are these:
      • “All men are created equal.” – US Declaration of Independence
      • “All shall be equal before the law.” – South African Freedom Charter
      • “The rights of every man are diminished when the rights of one man are threatened.” – John F. Kennedy
      • “There may be times when we are powerless to prevent injustice, but there must never be a time when we fail to protest.” – Elie Wiesel
      • “A right delayed is a right denied.” – Martin Luther King, Jr.
      • “We declare that human rights are for all of us, all the time: whoever we are and wherever we are from; no matter our class, our opinions, our sexual orientation.” – UN Secretary-General Ban Ki-moon
      • “To deny people their human rights is to challenge their very humanity.” – Nelson Mandela
      • “It means a great deal to those who are oppressed to know that they are not alone. Never let anyone tell you that what you are doing is insignificant.”  – Desmond Tutu, South African civil rights activist
      • “We are way more powerful when we turn to each other and not on each other, when we celebrate our diversity… and together tear down the mighty walls of injustice.” – Cynthia McKinney, American politician and activist
      • “We cannot all succeed when half of us are held back.” – Malala Yousafzai

 

 

  3. Professional sports teams all take a knee every time a significant point is scored. For example, in football, both teams would take a knee after every touchdown. In baseball, it would be after every home run. (We could say after every point is scored, but we would have to make sure the ball is no longer in play, which is why only on home runs makes for a clean solution.) For basketball, maybe only after a free throw point has been made.
    • Sports have always been a great unifier for different cultures. Post-Apartheid South Africa was able to rally and unify with the winning of the Rugby World Cup – something Nelson Mandela worked hard to promote and used to create a bridge between the Black and White populations. Working in the performing arts, I’ve worked on some really beautiful and impactful pieces. The problem is that the audience is quite limited – oftentimes, you end up preaching to the choir with your various performances of radical ideas. The performing arts community is quite liberal to begin with, so you’re not necessarily opening up new lines of communication. But with the sports world, your audience is quite vast. And watching the Colin Kaepernick protest, and the discourse surrounding it, has been really wonderful. Whether people agree with it or not, it’s starting lines of dialogue. Unfortunately, our country is quite polarized, so maybe that dialogue isn’t quite as open as one would hope. But it’s a start that it’s there. I wonder if there’s a way to take it further. To not just have it be a silent protest pre-game – but also an intervention during the game. Keep it present throughout. See what happens. If it causes more uproar, if it causes a bigger discourse. The likelihood of this ever happening is slim – but how interesting would it be if it were to happen? Would more people boycott? Or would the larger sports audience be receptive to it? Could we use the larger audience generated by sports to create an even bigger impact on social change?
    • Since I’ve been in school, I’ve kept this thought in the back of my mind – how can we reach a bigger audience? Professional sports are oftentimes the entertainment for the “traditional working man” (forgive me if this term is archaic – I’m hoping you know what I mean by it, or can help me find a better descriptor). I always remember after a long day, my Dad would come home and unwind, watching a 49ers game on Thursday night. And it was great – watching something that you could cheer for, get riled up over, plot a strategy with. And you can see by the sold-out stadiums, the high TV ratings, and the conversations sparked by different games how far the world of sports reaches. The day after a World Series game at work is always wonderful – checking in with my coworkers around the whole building about what happened, and what they thought. There has to be a way to tap into that. I think Kaepernick did a wonderful job doing that – sparking debate, and creating awareness. Say what you will about his football career – he was able to use his sports celebrity for a bigger purpose. This idea is just one next step that could happen. Again, it’s not realistic at all. But I do think it would be interesting to see what would happen to the game with this action throughout it.

 

 

Week 6: Canter Stop This Feeling

 

My print was ready earlier this week, and here it is:

It was a really great experience having it printed by LaGuardia Studios. The quality is really wonderful – so I’m definitely happy with it. I wish I had more time to re-do the model now that I see how it printed out. I wish the lower shapes were more segmented, and I wish I had modeled things differently on the face – made things a bit more stylized. But overall, I’m really happy with it. I love the grain of the print – I really do. I’m so happy I chose to print it in a way that that grain could be a feature.

Next I moved on to mounting it to my metal structure. I chose to use a standoff from my piece – so the head looks like it’s floating more. I wanted just a simple and clean connection point, and I’m happy with this one. I’m so happy I switched to using the metal for the back plate. I think it’s incredibly interesting. And the light with it adds to that. Overall, here are some shots of my final piece.

 

Here are renderings I went through to get to where I ended up:

 

And here is the drafting I did for this piece:

Week 5: Continuing to Horse Around

I really wanted to get my final printed by LaGuardia Studios – I’ve never worked with them before, so I wanted to see what it would be like. In order to get my final printed in time, I knew I needed to get it to them this week. I started by getting my model to a better place, and then scheduled a consultation with them. There, they were really helpful in figuring out how to finish it off in order to hang it on the backing piece I was thinking of using. Then I was able to continue to work on the model that night, and turn it in to them the next day.

I wanted my print to have a geometric look to it – I also chose to print it in a way that I think the texture/design of the printer will be interesting. I really want to embrace the style of it.

I have found that working in VectorWorks to get my sizes accurate and proportions right was easier. Honestly, the object dialog box was amazing for being able to tweak things accurately. But I was missing some of the newer tools we had learned in Rhino to make things more organic. So once I was happy with the base in VectorWorks, I exported the file to Rhino. Here’s what the file looked like in VectorWorks pre-export:

Then, once I was working in Rhino, I used the CageEdit tool on the main part of the horse head. I didn’t want to drastically change any sizing – I just wanted to try to get some more organic sculpting on his face, and the CageEdit feature really helped with that:

 

Then I put the model in Cura – just to see how the print would look, and so I could get a better sense of what adjustments I wanted to make:

Putting my model into Cura also helped me realize that I needed to close off my objects. I had gotten so into sculpting the top that I had forgotten to add thickness. I realized this a little too late before my appointment. But luckily, the folks at LaGuardia Studios are incredibly patient and helpful. I tried a couple of simple things to close off the back of my model – the simplest being to make a plane and trim it to the profile of the horse head. That worked for everything but one single shape – the weird twisted dome I had made, as you can see below:

That’s where the LaGuardia staff was insanely helpful. They helped me realize that I had gotten a little trim-happy with some of my shapes. If I had left my twisted hemisphere whole and not trimmed it to the regular hemisphere and cylinder, then I wouldn’t have had a problem. We tried a patch, creating a different plane along a curve – a few different things in Rhino. Then, they decided to take my file into Netfabb because it’s easier to check objects and repair them in that program. There we could see that the trimming I had done had turned some of the inside of the twisted hemisphere towards the outside – which is why the various techniques we had tried in Rhino weren’t working. Luckily, I still had the original objects in VectorWorks untrimmed, so I was able to import those back to Rhino, leave them whole, then export the whole thing again to Netfabb, where we were able to quickly repair the objects. I also learned about welding the surfaces and vertices together to help make the object more whole. Pictured below are the different levels of the finishing process we went through at LaGuardia. They were incredibly patient and understanding in showing me how to make a healthier object – it was really wonderful.

 

So now I’m waiting for my object to print – and I’ll check in with LaGuardia in a couple days to see how my print is doing.

In the meantime, I’m working on the mounting plate for my horse. I want to see it printed before I make a final decision – but I’m thinking about reusing a metal piece I had made for another class:

It’s some angle iron I had welded into an amorphous shape – which is a lot like how this horse profile feels. I still want to mount some LEDs behind it to let the light shine through. This is a bit of a last-minute thought – so I’m not sure if it will look like I’m doing too much with one thing. But once I see the final print, I’ll decide whether to use a wooden plaque, or this. Either way – I’m ready with both, and looking at LEDs for the light-up part.

 

 

Week 4: Final Project Proposal

For my final project, I would like to continue to work on my ‘Ode to a Horse Lamp.’ My current thought is to create a sort of plaque for the profile of the horse head to mount to. I’m still playing with exactly what I want out of the piece, but my current thought is that it will be mounted to a wooden board with a piece of plexiglass/acrylic in between. The horse will mount to the acrylic, and the acrylic will possibly have some LEDs embedded in it so that there’s a glow behind the horse head.

I would like for the horse head to be more stylized – more of an interpretation than a copy of the Horse Lamp. This is part of why I opted to do only half of the horse head as I have. My thought is that with the flat profile on the 3D print plate, I could get more detail on the top surface, and not have to worry so much about the overhangs and overhang structure. And my hope is that the way it prints will create interesting patterns. The original horse lamp has these old wood grain imprints, and I’m hoping I can create a 3D print grain by doing it this way. I’m hoping not to have to sand the piece. With the overhang structure, I had to break off and sand the pieces – which is totally fine, except that I didn’t like how the sanded portions looked versus the un-sanded 3D print grain, if you will. So I’m hoping to embrace and feature the 3D print nature of the piece and avoid overhangs altogether by focusing on the profile.

I also toyed with 3D in VectorWorks these last couple of weeks. I started using it only to ‘sketch’ and find the right proportions I wanted. In Rhino, I’ve been missing having an object dialog box. VectorWorks has one, and it helps to slightly adjust things while maintaining accuracy. So I started in 2D, then went ahead in 3D just to experiment some more. I found I was more accurately able to get what I was looking for. I’m still working in Rhino – and still working things out in Rhino, but I did find my various VectorWorks experiments useful.

My goal is to keep playing with both and see how I can better make my model. I spent most of my time modeling these last 2 weeks – finding that to be most beneficial for the final.

Week 3: A Horse Of A Different Color

For this week, we needed to work on our first print. Since our first could be anything, I decided that I would make a little ode to my horse lamp. I’ve done a laser-cutting prep version of my horse lamp before for Piecing It Together, so I decided to make a little tribute with a 3D print as well.

So this is my beautiful OG Horse Lamp…it goes without saying, but it’s a really good lamp:

This is an old ‘getting to know your horse lamp’ sketch that I did a while ago…

And here I was brushing up on how I wanted the base to work and insert.

One thing I quickly found, once I started working on getting the right shapes for the horse head in Rhino, was that it might be better if I simplified and worked with whole shapes more. So I sketched a bit more to try to find better tactics to get the horse shape:

I would still like to look at how working with the different extrude-along-path options, and creating planes between points, could help with creating some of the more complex shapes here, but I also was anxious to finish so that I could get to printing. So I kept with the more ‘simple’ shapes for now and will continue to practice and get to know the intricacies of Rhino and all its glory.

The base was easy enough to get done: just a cube with some cylinders subtracted for the head to fit in, and the corners filleted:

I left the inside a little more hollow, thinking I may add a little LED inside so it could light up one day – a place to sneak wires and things:

The horse head was more difficult…you can see where I made the peg at the base to fit in the square base. But past that, I started having issues creating the organic swoops of the horse. I tried going about it in different ways – I found creating a plane using 3–4 points incredibly useful at times, and lofting useful – but figuring out which curves to loft, and how, was a little troubling. As you can see, I also went about the mane in a couple different ways. I tried a ‘boxy’ version, which I didn’t mind aesthetically, by creating a surface and extruding it along a path. But then I made a thin ellipse and found that much quicker, and it looked a bit better as well.

I’d love to keep practicing getting more detail in the horse face itself – so I’m going to keep working on it, and see how he develops.

I had a few issues with printing – I was planning on using the Ultimaker – I prepped everything in Cura for the Ultimaker, and was ready to go – having checked it out on the GoogleCal and everything. But when I came in to use them, they were all taken – I think everyone (including myself) underestimated how long the prints would take. So I opted to try the Lulzbot Taz6. But – the nozzle wasn’t heating up right? Or the PLA wasn’t the appropriate PLA? I’m not sure – after some heartache, Chester played with it a bit and got my print going! Which was great – but in the meantime, I was able to get it started on the Ultimaker. But Chester started my print on the Lulzbot and said, “Let’s just see how it goes.” So, I ended up having a little race going between the Lulzbot and the Ultimaker – which was pretty awesome to see, and I’m pretty lucky I was able to see just how different the rate of print and quality of print was. My Ultimaker print was a quarter of the way done when the Lulzbot print (which started AFTER the Ultimaker) was already half of the way done. I didn’t want to hog the printers – but it seemed as though the traffic jam of waiting on printers had died down, so I figured I’d see how it all played out. I know Jaycee explained to us some of the pros and cons – saying exactly that: one is more detailed but slower, the other quicker but less detailed. But really seeing them going side by side was pretty awesome.

Here we are getting started on the Ultimaker:

And now here’s the Ultimaker 90 minutes into the print:

 

And here’s the Lulzbot 45 minutes into the print:

 

So at this juncture, I think that I definitely am happy with just trying things out on the Lulzbot first, and then going from there. Especially for something like this, where I’m just getting to know not only the printer, but the print. It’s been really interesting seeing how the supports are used – what’s needed. For example, I think in the future, for this print, I would support the nose of the horse, but the automatic supports added to the mane are unnecessary. I think I could have saved some time by adding in my own supports, rather than letting the Ultimaker and Lulzbot do it for me. Although, the automatic supports added by the Lulzbot are much more minimal than the Ultimaker’s (another reason why it’s probably quicker).

And here’s the Lulzbot Print with supports:

 

Here’s the Lulzbot without supports:

 

The Ultimaker with its supports:

 

The Base on the Lulzbot:

 

And here are the finished products – the black print being from the Ultimaker, the silver from the Lulzbot:

Week 2: Making Things Fit

This week our assignment was to make something that could fit together with another thing. I decided that I wanted to make a little picture frame for a little polaroid I have.

I started by measuring the photo itself with my calipers, and drawing that using rectangles. I wanted the picture to be centered in the frame, even though the photo isn’t centered on the polaroid photo. So I spent some time working on that.

Then I wanted to make sure that I could hang the picture, so I decided to do the little push pin hanger that you often see in photo frames – where there’s almost a teardrop shape that the push pin can enter into and hang off of.

I then decided to add a little planet and star to the frame for fun, so it followed the theme of the picture with the little astronaut man.

The final thing I did was draw a line to use as a split line so the back and front of the frame can come apart. I think I need to work on how these pieces fit together and stay together better – so that’s the thing I will continue to work on. But in the meantime, here’s what I worked on this week:

Things I’m continuing to work on:

– Moving objects accurately

– Working with the Z-Axis accurately

– Measuring things with calipers

– Figuring out how the objects fit together

– Rotate and rail commands

– Working with metrics

– Getting with the flow of working with Rhino

 

 

Week 1 – Getting to Know Rhino

This week we worked on getting to know Rhino, trying to create something that fit into a 3″x3″x3″ cube (or 76.2mm x 76.2mm x 76.2mm, as I’m learning). I’m used to drafting for my job – I first learned using AutoCAD, and now I use VectorWorks. I’ve done some 3D work with those programs – but not much. The first thing I kept finding myself doing was resorting to old keystrokes for those programs – out of habit, I kept hitting the ‘space’ bar hoping to get to a grabber hand, only to find myself in the last command I used. I’m also used to putting in values by ‘tabbing’ through the different measurements I can enter – which Rhino does not do. So I’m still learning how to find a new flow with working in Rhino.

I wanted to create a pattern this week using an image I found when researching lattice patterns.

Though the image looks simple, I quickly found that it was more difficult to create with Rhino. Particularly trying to rotate objects on the Z-Axis. But I did delve into some other interesting things. I worked with pyramids, rectangles, spheres, boxes, and ellipses. I’m used to drafting objects in 2D with different views for each – so the most difficult thing I found was getting out of that space and working with the Z-Axis. I really enjoyed the different combinations of adding, subtracting, and intersecting objects. I found that holding ‘alt’ down while moving an object copied it – a carry-over from VectorWorks that I really enjoyed. Accuracy I’m still having issues with. I love the snap zones – but moving with coordinates is difficult when I’m still discovering how I want something to look. (A lesson in going in with a plan.)

Below are some working screen shots of things I was trying along with where I ended up for now:

 

MashUps Midterm: Photo of the Day Mash

For my midterm for API MashUps I wanted to play with photos. The initial idea was to put two photos next to each other and have different settings and styles you could apply to them – if Photoshop and RJD2 had a red-headed stepchild, this is what it would look like. What I ended up with is not quite there – but we’ll get to that.

To start, I chose the NASA API to work with – thinking it would be easy to get information from it that I could play with. I ended up using the Photo of the Day API – which, in retrospect, I wish had been something more dynamic. But for beginning purposes, it was a good place to start. I had trouble figuring out the correct AJAX call to get the data. I ended up finding a GitHub NASA repo that had tutorials that were very helpful. So I was able to go from this:

 

To this:

 

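For reference, the call I got working boils down to a single GET request against the APOD endpoint. Here’s a sketch of building that request URL (the endpoint and `api_key` parameter are NASA’s documented ones; `DEMO_KEY` is NASA’s public demo key):

```javascript
// Sketch: build the request URL for NASA's Astronomy Picture of the Day API.
function apodUrl(apiKey, date) {
  const base = "https://api.nasa.gov/planetary/apod";
  const params = new URLSearchParams({ api_key: apiKey });
  if (date) params.set("date", date); // optional, YYYY-MM-DD
  return `${base}?${params.toString()}`;
}

// e.g. fetch(apodUrl("DEMO_KEY"))
//        .then(r => r.json())
//        .then(d => console.log(d.title, d.url));
```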
So then I set out to work with the Flickr API and see about getting information from that. That also took me a bit to figure out the correct call. I ended up taking a picture and posting on Instagram the console after I got data back for the first time:

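The Flickr call that finally worked maps to the `flickr.photos.search` REST method. Here’s a sketch of building that request URL and turning a returned photo record into an image URL (parameters per Flickr’s REST API; the key is a placeholder, and the farm-style photo URL format is the one in use around this time):

```javascript
// Sketch: build a Flickr photo-search request URL (flickr.photos.search).
function flickrSearchUrl(apiKey, text) {
  const params = new URLSearchParams({
    method: "flickr.photos.search",
    api_key: apiKey,
    text: text,
    format: "json",
    nojsoncallback: "1", // plain JSON instead of a JSONP wrapper
  });
  return `https://api.flickr.com/services/rest/?${params.toString()}`;
}

// Each photo record in the response can be turned into an image URL:
function flickrPhotoUrl(p) {
  return `https://farm${p.farm}.staticflickr.com/${p.server}/${p.id}_${p.secret}.jpg`;
}
```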
From there, I set out on displaying images. As you can see from the photo below, I was pleading with my code to work:

 

But once I got past that hurdle, I started playing with the CSS and displaying images differently. At this point, I was still hard-coding the search term. My hope was to see what the different attributes did in the CSS – using different filters and display options. My initial thought was to use this to help plan out how I would be able to arrange my photoDJshop. As I would find out later, affecting the CSS in the way I was visualizing isn’t really possible – but more on that later. For now, here are some photos of random calls of red pandas and Fenway (the greatest ballpark on Earth).

 

So going into the weekend before the midterm, I thought I was in a good place. I had my two APIs working well separately, and had done different research with them. At this point, I figured the best bet was to pull the NASA Photo of the Day, display that first, and then use its title as a search term for Flickr, using a random number to pull a photo from the returned array. From there, I would focus on figuring out how to make the photo style properties dynamic.
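That plan can be sketched as a small chain. `pickRandom` is a hypothetical helper (it isn’t part of either API), and `fetchApod`/`searchFlickr`/`showPhoto` are assumed wrappers around the calls described above.

```javascript
// Pull one element from an array at random
function pickRandom(photos) {
  const i = Math.floor(Math.random() * photos.length);
  return photos[i];
}

// Rough shape of the chain, assuming fetchApod, searchFlickr, and
// showPhoto wrappers exist:
// fetchApod(nasaKey)
//   .then((apod) => searchFlickr(flickrKey, apod.title))
//   .then((photos) => showPhoto(pickRandom(photos)));
```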

Here is where I ran into issues. I started with one ‘small’ goal: to code a slider that changes the opacity of the Flickr photo. The problem I found was manipulating the CSS on a sliding scale. I found a lot of documentation for changing a style completely; you can call functions that swap style properties in a click-button situation. (It’s either blue or green, the function is either on or off.) But getting a sliding scale was more difficult. At first, I imported the p5.js library because I remembered there was a slider I liked there. I was able to get that to work for a little canvas, but getting it to change style properties was proving more difficult. So then I looked at adding a slider in the HTML, but quickly found that not to be useful. Then I tried using a slider with jQuery; again, it looked interesting, but I ran into the same issues as with the p5.js slider, except that I know even less about jQuery than p5.js. So I went back to the p5.js slider and tried to create a function to help change the opacity. My hope was to use the p5.js map() call to map the slider position onto the CSS opacity. But things still weren’t working: I could get something in the JS to affect the JS, but crossing over to a floating variable in the CSS was not working.
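For what it’s worth, the mapping idea can be sketched without p5.js at all. `mapRange` below mirrors what p5’s `map()` does, and the element ids (`opacitySlider`, `flickrPhoto`) are made up for illustration; this is a sketch of one possible wiring, not what I had running at the time.

```javascript
// Linearly map a value from one range onto another (like p5.js map())
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// In the browser, an <input type="range" min="0" max="100"> could drive
// the image's inline style directly:
// const slider = document.getElementById("opacitySlider");
// const photo  = document.getElementById("flickrPhoto");
// slider.addEventListener("input", () => {
//   // style.opacity takes a number from 0 to 1, so map the 0..100
//   // slider position onto that range
//   photo.style.opacity = mapRange(Number(slider.value), 0, 100, 0, 1);
// });
```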

As you can see in the commented-out areas, I was trying a lot of different options. Before class I spoke with an ITP alum, Ruta, and we worked on it together, trying different options for affecting the CSS through JavaScript. Ruta helped by adding id tags to my htmlString to see if that could help. It did, but it proved more and more that the way I wanted to create a live variable in CSS through JS wasn’t possible. At that point, I thought about redoing the Flickr image display in JavaScript using the p5.js style properties. It was too late to do that for class, but it’s something I’m going to keep looking at. Below is a screenshot of what I ended up showing in class:

As you can see, there’s a slider that makes a little 80px canvas change in grayscale, but nothing past that. It was also at this point that I wished I had spent more time working on the content than the CSS, for example, figuring out how to parse key words out of the NASA photo title. The Flickr search term was the whole title of the NASA Photo of the Day, which pulled back really random, not very related images from Flickr. I think the content would’ve been much more interesting if I had sorted that out first, and then gone on to the styling interaction. But that’s just more to work on for the future.
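The key-word parsing could be as simple as filtering common words out of the title before searching. This is a hedged sketch of that idea; the stop-word list is a made-up starting point, not anything either API provides.

```javascript
// Words too generic to be useful as Flickr search terms (assumed list)
const STOP_WORDS = new Set(["a", "an", "the", "of", "in", "and", "over", "on"]);

// Break an APOD title into lowercase keywords, dropping stop words
function keywordsFromTitle(title) {
  return title
    .toLowerCase()
    .split(/\W+/)                          // split on non-word characters
    .filter((w) => w && !STOP_WORDS.has(w)); // drop empties and stop words
}
```

So a title like “A Total Eclipse of the Sun” would reduce to `total`, `eclipse`, `sun`, which should pull back far more related images than the full title.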

I’m not entirely sure where I want to go next: whether to continue on this project fixing the bugs (search terms, and JS animations and filters), or to take what I’ve learned here and reapply it more creatively to some different data visualization. I’d like to create something more dynamic than what I ended up with.

All in all, I think I learned more about CSS and JS than ever before, which is exciting. I have a much better idea of scope and how these two things work together. I remember spending some time on HTML and CSS in ICM, but honestly, I didn’t really understand it past what was prepped for us in the files preloaded in the p5 Web Editor. I’m interested to play with more APIs and see what information is available. It seems like each API is unique in how it uses keys, so I’m interested in practicing that more. I honestly tried picking APIs that were easy to use so that I could get to the meat of this project (and even those were difficult for me to get data back from). But after going through this, I think if I had taken more time to look into more APIs and data sets, I could’ve gotten something more engaging. I will say, I’m less scared of APIs than I was two weeks ago, so I’m eager to see what else is out there.