Digital creation is just data. Whether the medium is an image, a video, a 3D model or a sound file, everything breaks down into ones and zeroes. You can modify, convert, adapt, reuse or combine pieces of data to create new assets, effects or whatever you need. Technical art is about breaking the process down into its parts and pieces, and recombining them in new and exciting ways. It is like seeing the Matrix and rewriting it to fit your needs.
You could, for example, use a shader to turn any image you provide into a mandala: duplicate the image as a texture, rotate the UV coordinates by a certain number of degrees, then blend the copies with the Lighten blend mode you know from image-processing software like Photoshop. This is exactly what I am doing with my Mandala Shader.
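The core of that idea fits in a few lines. Here is a minimal sketch of the math in Python with numpy, where `sample(uv)` stands in for a hypothetical texture lookup (a real shader would do this per pixel on the GPU):

```python
import numpy as np

def rotate_uv(uv, degrees):
    """Rotate UV coordinates around the texture center (0.5, 0.5)."""
    theta = np.radians(degrees)
    c, s = np.cos(theta), np.sin(theta)
    centered = uv - 0.5
    rotated = np.stack([centered[..., 0] * c - centered[..., 1] * s,
                        centered[..., 0] * s + centered[..., 1] * c], axis=-1)
    return rotated + 0.5

def lighten(a, b):
    """Lighten blend mode: the per-channel maximum, as in Photoshop."""
    return np.maximum(a, b)

def mandala(sample, uv, copies):
    """Blend `copies` rotated samples of the same texture with Lighten."""
    result = sample(uv)
    for i in range(1, copies):
        result = lighten(result, sample(rotate_uv(uv, 360.0 * i / copies)))
    return result
```

Because Lighten is commutative and associative, the order in which the rotated copies are blended does not matter, which is what makes the result look symmetric.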
You could make a vertex shader that reacts to the music being played: convert the volume of the current track into a float value and feed it into the shader, so that the mesh expands and contracts with the music. Why not go a step further and create a black-and-white noise or cloud map that scrolls over the UV coordinates? You can even reuse the volume value for the scroll speed. Then the mesh deforms not only based on volume, but also based on the texture, which makes the deformation look unique all the time.
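The displacement step could look roughly like this. It is an illustrative CPU-side sketch, not a specific engine's API; `noise(uv)` is a hypothetical lookup into a tiling grayscale noise map:

```python
import numpy as np

def displace(positions, normals, uvs, noise, volume, time, strength=0.1):
    """Push vertices along their normals, driven by the track's volume
    and a scrolling grayscale noise texture."""
    # Reuse the volume value for the scroll speed: loud passages scroll faster.
    scrolled_uv = (uvs + time * volume) % 1.0
    noise_vals = np.array([noise(uv) for uv in scrolled_uv])
    # Deform by both the overall volume and the per-vertex noise value,
    # so the silhouette never repeats exactly.
    amount = strength * volume * noise_vals
    return positions + normals * amount[:, None]
```

A vertex shader would run the same math per vertex, with the volume passed in each frame as a uniform float.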
Speaking of music: while I was making my Pragaras – Game prototype, I came up with the idea of a rigged and animated 3D model that could play any song automatically by reading the MIDI data and triggering the appropriate animations for hitting the right drums. An advanced version would use animation blending and adjust the animation speeds to get the timing right. I would have loved to include this in my prototype, but unfortunately I did not have the time for it.
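One way to sketch the scheduling part: map General MIDI percussion note numbers to hit animations, then start each animation early and scale its speed so the stick lands exactly on the note's timestamp. The animation names and the `windup` figure are assumptions for illustration; a real version would also blend animations per limb:

```python
# A small subset of the General MIDI percussion key map (channel 10).
DRUM_ANIMATIONS = {
    36: "HitKick",
    38: "HitSnare",
    42: "HitHiHatClosed",
    49: "HitCrash",
}

def schedule_hits(midi_notes, windup=0.25):
    """Turn (time, note) pairs into (start_time, animation, speed) triples.

    Each hit animation is assumed to take `windup` seconds from its first
    frame to the moment of impact; when two hits are too close together,
    the wind-up is sped up so the impact still lands on time.
    """
    schedule = []
    prev_time = 0.0
    for time, note in midi_notes:
        if note not in DRUM_ANIMATIONS:
            continue  # no animation mapped for this drum
        available = max(time - prev_time, 1e-3)
        # Speed up the animation if there is not enough time for the wind-up.
        speed = max(1.0, windup / available)
        schedule.append((time - windup / speed, DRUM_ANIMATIONS[note], speed))
        prev_time = time
    return schedule
```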
Did you know that you can create bump and normal maps from grayscale images? Well, if your asset has a texture, why not write a shader that takes the luminance of that texture, converts it into a normal map and applies it?
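The conversion boils down to treating luminance as height and turning its slope into a tilt of the surface normal. A minimal sketch with numpy, using the Rec. 709 luminance weights and central differences for the slope:

```python
import numpy as np

def luminance(rgb):
    """Rec. 709 luminance of an (..., 3) RGB array in [0, 1]."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def normals_from_luminance(height, strength=1.0):
    """Derive a tangent-space normal map from a grayscale height image.

    `height` is a 2-D array in [0, 1]. np.gradient approximates the
    per-pixel slope; a steeper slope tilts the normal further away
    from straight-up (0, 0, 1). Returns an (H, W, 3) unit-normal array.
    """
    dy, dx = np.gradient(height.astype(np.float64))
    normal = np.dstack((-dx * strength, -dy * strength, np.ones_like(height)))
    normal /= np.linalg.norm(normal, axis=2, keepdims=True)
    return normal
```

The `strength` factor plays the same role as the bumpiness slider in a normal-map baking tool: it exaggerates or flattens the slopes before normalization.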
Speaking of textures: an image file has three channels, Red, Green and Blue, and sometimes a fourth one, the alpha channel. These are grayscale images that can be viewed and processed individually. If your asset needs a lot of textures to look good, why not combine some of them into a single texture? You could put the ambient occlusion map into the R channel, the specularity/glossiness map into the G channel, and so on.
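Packing and unpacking is just stacking and slicing arrays. A sketch with numpy; the channel assignments below are one common convention, not a standard, so treat the layout as an assumption:

```python
import numpy as np

def pack_channels(ao, gloss, metal=None, mask=None):
    """Pack up to four grayscale maps into one RGBA texture.

    Each argument is a 2-D array in [0, 1]. Illustrative layout:
    AO in R, specularity/glossiness in G, metallic in B, mask in A.
    """
    zeros = np.zeros_like(ao)
    return np.dstack((
        ao,                                     # R: ambient occlusion
        gloss,                                  # G: specularity / glossiness
        metal if metal is not None else zeros,  # B: metallic
        mask if mask is not None else zeros,    # A: e.g. an emission mask
    ))

def unpack_channel(packed, index):
    """Read one grayscale map back out (0=R, 1=G, 2=B, 3=A)."""
    return packed[..., index]
```

The shader then samples the packed texture once and reads each map from its channel, saving texture samplers and memory compared to four separate grayscale files.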
I can help you bridge the gap between programming and design, art and technology, logic and emotion.
Below you can see some of my previous projects and read about how they were made. I would love to add your project to the list.