Programming For Artists Lesson 8 Playing with pixels Learning objectives: Access and manipulate the pixel array.


1 Programming For Artists Lesson 8 Playing with pixels Learning objectives: Access and manipulate the pixel array

2 Ready-made image manipulation methods

3 Creating an alpha image mask: the img.mask() method works because the light areas of the grayscale mask allow the original through, while black areas of the mask don't allow the image through. The mask has to be grayscale, and the masking image should be the same size as your other image. You could create this mask in Processing and save the image file like we did last week. How handy!

4 This is the mask image

5 This is the mask image inverted: the black part of the mask image stops the image underneath coming through.

6 mask() masks part of an image from displaying by loading another image and using it as an alpha channel. The mask image should only contain grayscale data, but only its blue colour channel is actually used. The mask image needs to be the same size as the image to which it is applied. See the Processing reference for more (enter "mask" in the search box).
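Since Processing sketches are Java under the hood, the per-pixel logic behind mask() can be sketched in plain Java. This is an illustrative approximation, not Processing's actual implementation, and the class and method names are my own: the mask pixel's blue channel becomes the alpha channel of the result, so blue 0 (black) hides the pixel and blue 255 (white) shows it.

```java
public class MaskSketch {
    // Combine an image pixel with a mask pixel: the mask's blue
    // channel (0-255) becomes the result's alpha channel.
    static int applyMask(int imgPixel, int maskPixel) {
        int alpha = maskPixel & 0xFF;               // blue channel of the mask
        return (alpha << 24) | (imgPixel & 0x00FFFFFF);
    }

    public static void main(String[] args) {
        int opaqueRed = 0xFFFF0000;
        int blackMask = 0xFF000000; // mask blue = 0   -> fully transparent
        int whiteMask = 0xFFFFFFFF; // mask blue = 255 -> fully opaque
        System.out.println(Integer.toHexString(applyMask(opaqueRed, blackMask))); // ff0000 (alpha byte is 0)
        System.out.println(Integer.toHexString(applyMask(opaqueRed, whiteMask))); // ffff0000
    }
}
```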


8 I inverted the masking image to make it work the way I wanted.

9 Using blur to create a shadow; animating a blur.





14 Tints

15 But you will be making your own tints by the end of this lesson, accessing the red(), green() and blue() components of the pixels[] array.
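Under the hood, red(), green() and blue() just unpack bytes from a 32-bit packed colour value. A rough sketch of that unpacking in plain Java (these helper functions are mine, written to mirror the Processing ones, not Processing's own code):

```java
public class Channels {
    // A Processing color is a 32-bit int: alpha, red, green, blue, 8 bits each.
    static int red(int c)   { return (c >> 16) & 0xFF; }
    static int green(int c) { return (c >> 8) & 0xFF; }
    static int blue(int c)  { return c & 0xFF; }

    public static void main(String[] args) {
        int c = 0xFF3366CC; // opaque colour: R = 0x33, G = 0x66, B = 0xCC
        System.out.println(red(c) + " " + green(c) + " " + blue(c)); // 51 102 204
    }
}
```

Once you can take a colour apart like this, a tint is just scaling the channels before packing them back together.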

16 Blending images (sketch: blend1). This image of two lines is being blended with itself in ADD mode. This can also be done with image files (example coming up).

17 The coordinate variables in the blend() function refer to the whole image, not the lines. So if I change the width of the destination image it looks like this; where there is an overlap you can see the result of ADD.

18 Blending images: the blend() method mixes pixels in different ways depending on its last parameter, mode. The previous image was blended with itself. The blend() method has NINE (yes, NINE) parameters, and can be overloaded with ten. In our example they are: blend(x, y, width, height, dx, dy, dwidth, dheight, MODE); where dx, dy, etc. refer to the destination image. Modes can be BLEND, ADD, SUBTRACT, and lots more (look them up and experiment).

19 Parameters:
x (int): X coordinate of the source's upper-left corner
y (int): Y coordinate of the source's upper-left corner
width (int): source image width
height (int): source image height
dx (int): X coordinate of the destination's upper-left corner
dy (int): Y coordinate of the destination's upper-left corner
dwidth (int): destination image width
dheight (int): destination image height
srcImg (PImage): an image variable referring to the source image
MODE: either BLEND, ADD, SUBTRACT, LIGHTEST, DARKEST, and more
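Per pixel, ADD mode simply sums the source and destination channel values, clamped at 255. A plain-Java sketch of that arithmetic for opaque pixels (the helper names are mine, not Processing's internals):

```java
public class AddBlend {
    static int addChannel(int a, int b) { return Math.min(a + b, 255); }

    // ADD-blend two opaque RGB pixels channel by channel.
    static int addBlend(int src, int dst) {
        int r = addChannel((src >> 16) & 0xFF, (dst >> 16) & 0xFF);
        int g = addChannel((src >> 8) & 0xFF, (dst >> 8) & 0xFF);
        int b = addChannel(src & 0xFF, dst & 0xFF);
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        // mid grey + mid grey saturates every channel to 255, i.e. white
        System.out.println(Integer.toHexString(addBlend(0xFF808080, 0xFF808080))); // ffffffff
    }
}
```

This saturating behaviour is why blending an image with itself in ADD mode lightens it: every overlapping pixel gets brighter until it clips at white.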

20 Here our image of two lines is blended with a jpg file in DARKEST mode (sketch: blend2). The source pixels of (A), the jpg, are blended with the pixels in the destination image, this mode only allowing the darkest through. Here the background comes through as it is black.

21 In LIGHTEST mode the destination image comes through as it is white: LIGHTEST allows through only the lightest pixels of the destination image, so the background doesn't come through.

22 Blending two image files using the blend() method of the PImage class, which blends a region of pixels into the image specified by the img parameter. Ten parameters are used here; the first (which is our extra parameter) points to a source image, in this case img2. If the srcImg parameter is not used, the display window is used as the source image. So img will blend with img2, additively blending 76 by 76 pixels of itself at an x and y of 12, 12; img2 is blended at this size and coordinate set also. The last line, image(img, 0, 0);, draws the new modified image, with the blended section at 12, 12, 76 by 76.

23 Playing with pixels. You can directly access pixels by using the methods get() and set(), or by loading the pixels[] array and manipulating it directly.

24 get() Reads the color of any pixel or grabs a section of an image. If no parameters are specified, the entire image is returned. Get the value of one pixel by specifying an x,y coordinate. Get a section of the display window by specifying an additional width and height parameter. If the pixel requested is outside of the image window, black is returned. The numbers returned are scaled according to the current color ranges, but only RGB values are returned by this function. For example, even though you may have drawn a shape with colorMode(HSB), the numbers returned will be in RGB.

25 Using get()

26 set() Changes the color of any pixel or writes an image directly into the display window. The x and y parameters specify the pixel to change and the color parameter specifies the color value. The color parameter is affected by the current color mode (the default is RGB values from 0 to 255). When setting an image, the x and y parameters define the coordinates for the upper-left corner of the image

27 Using set()

28 You can use set() to write an image to the window. It's faster than using image(trees, 0, 0); etc., but you cannot transform, tint, or resize when using set().

29 Getting the color of a single pixel with get(x, y) is easy, but not as fast as grabbing the data directly from pixels[]. The equivalent statement to "get(x, y)" using pixels[] is "pixels[y*width+x]". The (BETA) version of Processing requires calling loadPixels() to load the display window data into the pixels[] array before getting the values.
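The pixels[y*width+x] arithmetic is worth internalising: each full row above row y contributes width pixels, then x steps along the row. A tiny Java check of the formula (the window width and coordinates are arbitrary example values):

```java
public class PixelIndex {
    // The 1D pixels[] index for the pixel at column x, row y.
    static int index(int x, int y, int width) { return y * width + x; }

    public static void main(String[] args) {
        int width = 100;
        int i = index(5, 2, width); // pixel at column 5, row 2
        System.out.println(i);         // 205
        System.out.println(i % width); // back to x: 5
        System.out.println(i / width); // back to y: 2
    }
}
```

Going the other way, x = i % width and y = i / width recover the coordinates from an index, which is handy when looping over pixels[] with a single counter.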

30 Writing our own image manipulation methods

31 The pixels[] array stores a colour value for each pixel of the display window. The loadPixels() method must be called before the pixels[] array can be used. If the pixels have been read or changed, then the updatePixels() method must be called. loadPixels() and updatePixels() work together to ensure that the pixels[] array can be manipulated and any changes updated.

32 pixels[] is an array containing the values for all the pixels in the display window. These values are of the color datatype. This array is the size of the display window. For example, if the image is 100x100 pixels, there will be 10,000 values, and if the window is 200x300 pixels, there will be 60,000 values. The index value defines the position of a value within the array. For example, the statement color b = pixels[230] will set the variable b to be equal to the value at that location in the array. Before accessing this array, the data must be loaded with the loadPixels() function. After the array data has been modified, the updatePixels() function must be called to update the changes. Without loadPixels(), running the code may (or will in future releases) result in a NullPointerException.


34 Making your own filters

35 /* red(), green() and blue() read individual colour components from the values in the pixels[] array.
Read them, change them, and return them to the pixels[] array, then call updatePixels() to see the changes.
EXPERIMENT: if values are multiplied by 2 the image becomes lighter; if divided by 2 (/2) the image will become darker.
The for loop goes through every pixel in the array, reading it and changing it, then we update the array. */

PImage arch;

void setup() {
  arch = loadImage("arch.jpg");
}

void draw() {
  background(arch);
  loadPixels();
  for (int i = 0; i < width*height; i++) {
    color p = pixels[i]; // Grab pixel

    /* //INVERT THE IMAGE:
    float r = 255 - red(p);   // Modify red value
    float g = 255 - green(p); // Modify green value
    float b = 255 - blue(p);  // Modify blue value
    */

    /* //make image lighter by adding to the values:
    float r = 255*0.4 + red(p);   // Modify red value
    float g = 255*0.4 + green(p); // Modify green value
    float b = 255*0.4 + blue(p);  // Modify blue value
    */

    //make image darker by dividing the values:
    float r = red(p)/2;   // Modify red value
    float g = green(p)/2; // Modify green value
    float b = blue(p)/2;  // Modify blue value

    //EXPERIMENT: MAKE YOUR OWN FILTERS!!
    pixels[i] = color(r, g, b); // Assign modified value
  }
  updatePixels();
}

Try this: your own tint, no need for a ready-made Processing function! Try to do this. (clue: uses a random value from 0 to 255)

36 Using a convolution matrix. Convolution changes the value of each pixel in relation to the neighbouring pixels. A matrix of numbers called a convolution kernel is applied to every pixel in the image: neighbouring pixels are multiplied by the corresponding kernel value and added together to set the value of the centre pixel. Doing this to every pixel in the image is convolving the image. Most Photoshop filters use this type of operation. See the Processing book, pages 360-363.

// code fragment: a 3 x 3 convolution kernel
float[][] kernel = { {-1, 0, 1},
                     {-2, 0, 2},
                     {-1, 0, 1} };

First, an interlude to explain the multidimensional array, or array of arrays:

37 This is an array of arrays (also known as a multidimensional array). First of all, we use two or more sets of square brackets, such as String[][] names: effectively rows on the left and columns on the right. In the Java (and therefore Processing) programming language, a multidimensional array is simply an array whose components are themselves arrays. A consequence of this is that the rows are allowed to vary in length (so-called jagged arrays).

String[][] names = { {"Mr. ", "Mrs. ", "Ms. "},
                     {"Smith", "Jones"} };
print(names[0][0] + names[1][0]); //Mr. Smith
print(names[0][2] + names[1][1]); //Ms. Jones

The output from this program is: Mr. Smith Ms. Jones
More information in the Java tutorial. BACK TO THE CONVOLUTION KERNEL

38 Neighbouring pixels are multiplied by the corresponding kernel value and added together to set the value of the centre pixel. Doing this to every pixel in the image is convolving the image.
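The kernel arithmetic described above can be checked on a tiny grayscale patch in plain Java: the 3x3 neighbourhood is multiplied element-wise by the kernel and summed to give the new centre value. The pixel values here are made-up test data:

```java
public class Convolve {
    // Apply a 3x3 kernel to the 3x3 neighbourhood centred on (x, y).
    static float convolvePixel(float[][] img, int x, int y, float[][] kernel) {
        float sum = 0;
        for (int ky = -1; ky <= 1; ky++) {
            for (int kx = -1; kx <= 1; kx++) {
                sum += kernel[ky + 1][kx + 1] * img[y + ky][x + kx];
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        float[][] patch = { {10, 20, 30}, {40, 50, 60}, {70, 80, 90} };
        // An identity kernel (all zeros, 1 in the middle) leaves the centre pixel unchanged
        float[][] identity = { {0, 0, 0}, {0, 1, 0}, {0, 0, 0} };
        System.out.println(convolvePixel(patch, 1, 1, identity)); // 50.0
        // A kernel summing to more than 1 brightens: here the result is the sum of the whole patch
        float[][] allOnes = { {1, 1, 1}, {1, 1, 1}, {1, 1, 1} };
        System.out.println(convolvePixel(patch, 1, 1, allOnes)); // 450.0
    }
}
```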

39 If the numbers in the kernel add up to 1, the overall brightness will remain the same when the image is convolved. If the sum is a smaller or larger number, the image will become darker or lighter in value than the original.


41 Experiment by modifying the values in the kernel; use the chart from the previous slide to start your experiments.

42 Here I've changed the kernel into something closer to an edge detection algorithm. Edge detection algorithms often have negative numbers along one side and positive numbers on the opposing side with zeros in the middle, which is exactly what I've done here.

43 // FULL CODE: a convolution kernel for creating lots of effects
// kernel sum above 1 = lighter, below 1 = darker
float[][] kernel = { { -1, 0, 1 },
                     { -2, 0, 2 },
                     { -1, 0, 1 } };

void setup() {
  size(100, 100);
}

void draw() {
  PImage img = loadImage("arch.jpg"); // Load the original image
  img.loadPixels();
  // Create an opaque image of the same size as the original
  PImage edgeImg = createImage(img.width, img.height, RGB);
  // Loop through every pixel in the image
  for (int y = 1; y < img.height - 1; y++) {   // Skip top and bottom edges
    for (int x = 1; x < img.width - 1; x++) {  // Skip left and right edges
      float sum = 0; // Kernel sum for this pixel
      for (int ky = -1; ky <= 1; ky++) {       // Look at neighbouring pixels
        for (int kx = -1; kx <= 1; kx++) {
          // Calculate the adjacent pixel for this kernel point
          int pos = (y + ky) * img.width + (x + kx);
          // Image is grayscale, red/green/blue values are identical
          float val = red(img.pixels[pos]);
          // Multiply adjacent pixels by the kernel values and add them up
          sum += kernel[ky+1][kx+1] * val;
        }
      }
      // For this pixel in the new image, set the gray value
      // based on the sum from the kernel
      edgeImg.pixels[y*img.width + x] = color(sum);
    }
  }
  // State that there are changes to edgeImg.pixels[]
  edgeImg.updatePixels();
  image(edgeImg, 0, 0); // Draw the new image
} //end

44 Check out the blur and edge detection examples in Processing; they also use convolution kernels. Look at the code example overleaf; it's interesting because it allows you to target specific sections of the image:

45 //in this clever example the convolution only affects the area where the mouse is

PImage img;
int w = 80;

void setup() {
  size(200, 200);
  frameRate(30);
  img = loadImage("end.jpg");
}

void draw() {
  // We're only going to process a portion of the image,
  // so let's set the whole image as the background first
  image(img, 0, 0);
  // Where is the small rectangle we will process?
  int xstart = constrain(mouseX - w/2, 0, img.width);
  int ystart = constrain(mouseY - w/2, 0, img.height);
  int xend = constrain(mouseX + w/2, 0, img.width);
  int yend = constrain(mouseY + w/2, 0, img.height);
  int matrixsize = 3;
  // It's possible to convolve the image with many different matrices
  /* a sharpen matrix:
  float[][] matrix = { { -1, -1, -1 },
                       { -1,  9, -1 },
                       { -1, -1, -1 } }; */
  float[][] matrix = { { -2, -2, -2 },
                       { -2,  9,  0 },
//ctd

46 { -1, 0, 0 } }; loadPixels(); // Begin our loop for every pixel for (int x = xstart; x < xend; x++) { for (int y = ystart; y < yend; y++ ) { color c = convolution(x,y,matrix,matrixsize,img); int loc = x + y*img.width; pixels[loc] = c; } updatePixels(); } color convolution(int x, int y, float[][] matrix,int matrixsize, PImage img) { float rtotal = 0.0; float gtotal = 0.0; float btotal = 0.0; int offset = matrixsize / 2; for (int i = 0; i < matrixsize; i++){ for (int j= 0; j < matrixsize; j++){ // What pixel are we testing int xloc = x+i-offset; int yloc = y+j-offset; int loc = xloc + img.width*yloc; // Make sure we haven't walked off our image, we could do better here //ctd

47    loc = constrain(loc, 0, img.pixels.length - 1);
      // Calculate the convolution
      rtotal += (red(img.pixels[loc]) * matrix[i][j]);
      gtotal += (green(img.pixels[loc]) * matrix[i][j]);
      btotal += (blue(img.pixels[loc]) * matrix[i][j]);
    }
  }
  // Make sure RGB is within range
  rtotal = constrain(rtotal, 0, 255);
  gtotal = constrain(gtotal, 0, 255);
  btotal = constrain(btotal, 0, 255);
  // Return the resulting colour
  return color(rtotal, gtotal, btotal);
}

48 We can do all this on video files (.mov) and on live video input. This is a very good time to start reading the Processing books; there are lots more methods and ideas for you to experiment with. We have only scratched the surface!

49 /* using an image as a colour palette, from the Processing book by Casey Reas and Ben Fry */

PImage img;

void setup() {
  size(300, 300);
  smooth();
  frameRate(0.5); // really slow
  img = loadImage("palette10x10.jpg");
}

void draw() {
  background(0);
  for (int x = 0; x < img.width; x++) {
    for (int y = 0; y < img.height; y++) {
      float xpos1 = random(x * 10);
      float xpos2 = width - random(y * 10);
      /* get the colour from the image and put it in stroke()
         below to colour the random lines */
      color c = img.get(x, y);
      stroke(c);
      line(xpos1, 0, xpos2, height);
    }
  }
}

This image file is in a folder called data in the sketch folder; its colours are used to stroke the lines in this program.

50 Labs recap: please complete all labs and upload to your site, with all your code available online, before the last class on July 14th. COMMENT your code and acknowledge any code you have taken from other sources. For each lab, indicate which one is the final submission (we won't mark more than one lab for each task).

51 Lab 1 consists of two tasks:
a) Draw a creature in Processing.
b) Create a data visualisation (can be any type of data).
Lab 2 consists of two tasks:
a) Make your creature with relative values. You may have to simplify the critter, and definitely work out the relationships between shapes; see my cyclops example.
b) Recreate one of the Mondrian/Klee examples. Can you adapt them to visualise data somehow?
Lab 3:
a) Make your own patterns using some of the looping structures we learnt about today. Make sure you comment your code, //explaining what you have done and why.
b) Make your creature move across the screen and back. If you have already done this, see if you can make 2 creatures collide; you might have to research this. Look at the Processing examples and search the forum.
Lab 4:
* Combine conditional logic (if, else) with logical operators (&&, ||, !) to create different conditional outcomes (more exciting variations on the examples in this power-point!).
* Use random() to generate unpredictable numeric values that might influence the outcomes.
* Look at the pattern examples from last week's power-point if you need ideas for loops.
* If you have strong design/visual skills, make sure they are deployed in this exercise.
Lab 5, writing functions/methods:
a) Write your own void and non-void functions in one program that draws something interesting to the screen (you can do a sound example as well if you are able). Add an overloaded version of one of your methods and invoke it.
b) Start to think about a mini project that will be handed in on Saturday 14th July. On that day you will present your mini project (put it online as well, with the commented code). You will also make a minute presentation on an artist/project that interests you. Document your project and presentation.
So your lab 6, 7, 8 and 9 work is to research and design your own mini-project. Also prepare a minute presentation on a digital artist/artwork that interests you.
