Bitmaps

An Android exclusive for in-depth analysis of images!

Note: you don't have to use Bitmaps with OpenCV. :)

Bitmaps are a fundamental concept in Android development, representing images in a pixel-based format. The Android API provides the Bitmap class, which allows developers to create, manipulate, and display images efficiently. Bitmaps are widely used for various purposes, such as displaying images in an ImageView, creating custom graphics, and processing images in different ways.

Key Features of Bitmaps:

  1. Image Representation: Bitmaps are used to represent images as a grid of pixels. Each pixel's color and transparency are stored in the Bitmap, allowing developers to manipulate individual pixels.

  2. Memory Management: Bitmaps consume memory, and developers must handle memory management carefully to prevent OutOfMemoryErrors. Different strategies, such as scaling or caching, can be employed to optimize memory usage when dealing with large images.

  3. Image Loading: Bitmaps can be loaded from various sources, such as resources, assets, or the internet. Additionally, they can be created programmatically to draw custom graphics or perform image processing tasks.

  4. Image Manipulation: Bitmaps provide methods to manipulate images, including scaling, rotating, flipping, and applying color filters. These operations can be used to create various visual effects and optimize image display.

  5. Displaying in UI: Bitmaps are often used to display images in user interfaces through ImageView or other custom views. They can be easily loaded into ImageView using setImageBitmap().

  6. Image Processing: Bitmaps serve as the foundation for image processing tasks like face detection, object recognition, and computer vision algorithms, where pixel-level analysis is required.
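For example, each pixel in a Bitmap is stored as a packed 32-bit ARGB integer; on Android you would read one with bitmap.getPixel(x, y) and unpack it with Color.red(), Color.green(), and Color.blue(). A minimal sketch of the unpacking itself (pure Java, no Android classes, so the pixel value here is a hand-made example):

```java
public class PixelDemo {
    // Unpack the channels of a packed 32-bit ARGB pixel.
    // This is the same math Color.red()/green()/blue() perform.
    static int red(int pixel)   { return (pixel >> 16) & 0xFF; }
    static int green(int pixel) { return (pixel >> 8) & 0xFF; }
    static int blue(int pixel)  { return pixel & 0xFF; }

    public static void main(String[] args) {
        int orange = 0xFFFF8800; // opaque orange pixel (A=255, R=255, G=136, B=0)
        System.out.println(red(orange) + " " + green(orange) + " " + blue(orange));
    }
}
```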

In this project we are trying to get all the RGB values of a picture so we can find the orange pixels (this project is continued on the next page).

To do this, we first need to implement the init and start stages, where we take a Bitmap picture.

```java
import android.graphics.Bitmap;
import android.graphics.Color;
import android.graphics.ImageFormat;
import android.os.Handler;

import androidx.annotation.NonNull;

import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
import com.qualcomm.robotcore.util.RobotLog;

import org.firstinspires.ftc.robotcore.external.ClassFactory;
import org.firstinspires.ftc.robotcore.external.android.util.Size;
import org.firstinspires.ftc.robotcore.external.function.Consumer;
import org.firstinspires.ftc.robotcore.external.function.Continuation;
import org.firstinspires.ftc.robotcore.external.hardware.camera.Camera;
import org.firstinspires.ftc.robotcore.external.hardware.camera.CameraCaptureRequest;
import org.firstinspires.ftc.robotcore.external.hardware.camera.CameraCaptureSequenceId;
import org.firstinspires.ftc.robotcore.external.hardware.camera.CameraCaptureSession;
import org.firstinspires.ftc.robotcore.external.hardware.camera.CameraCharacteristics;
import org.firstinspires.ftc.robotcore.external.hardware.camera.CameraException;
import org.firstinspires.ftc.robotcore.external.hardware.camera.CameraFrame;
import org.firstinspires.ftc.robotcore.external.hardware.camera.CameraManager;
import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.robotcore.internal.collections.EvictingBlockingQueue;
import org.firstinspires.ftc.robotcore.internal.network.CallbackLooper;
import org.firstinspires.ftc.robotcore.internal.system.AppUtil;
import org.firstinspires.ftc.robotcore.internal.system.ContinuationSynchronizer;
import org.firstinspires.ftc.robotcore.internal.system.Deadline;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Locale;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;
```

These are the imports I'm using in this file.

Here we wait for the user to press A, which takes a Bitmap picture and then runs a function called onNewFrame(Bitmap frame).

Finally, we close the camera, since our work is finished!
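The loop described above could be sketched like this, modeled on the SDK's ConceptWebcam sample. frameQueue, onNewFrame() and closeCamera() are assumed helpers defined elsewhere in the OpMode, so this is a sketch rather than a drop-in implementation:

```java
// Wait for A, pull the latest captured frame, and hand it to onNewFrame().
waitForStart();
while (opModeIsActive()) {
    if (gamepad1.a) {
        Bitmap bmp = frameQueue.poll();   // newest frame from the capture callback
        if (bmp != null) {
            onNewFrame(bmp);              // process the picture we just took
        }
    }
}
closeCamera();                            // our work is finished, release the camera
```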

Here we save the Bitmap, immediately delete it, and then return the result of getRGBvalues(Bitmap bmp), which tells us where the cone is (since the cone is orange; continued on the next page).
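A sketch of what onNewFrame(Bitmap frame) could look like under that description. saveBitmap() and getRGBvalues() are the functions defined later on this page; the file name and the recycle() call are assumptions for illustration:

```java
// Save the frame to disk, delete the file again, and hand back the RGB values.
private List<int[]> onNewFrame(Bitmap frame) {
    File file = saveBitmap(frame, "webcam-frame"); // save to the Control Hub
    if (file != null) {
        file.delete();                             // we only needed the Bitmap itself
    }
    List<int[]> rgbValues = getRGBvalues(frame);   // per-pixel (R,G,B) triples
    frame.recycle();                               // free the Bitmap's pixel memory
    return rgbValues;
}
```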

but first we need to define our camera functions!

This is just a quality-of-life function to monitor memory usage so things go more smoothly.
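Such a monitor can be sketched with plain Runtime calls (the class and method names here are assumptions, not the page's exact code):

```java
public class MemoryMonitor {
    // Report how much of the JVM heap is currently in use, in megabytes.
    // Worth logging regularly, since Bitmaps can eat memory quickly.
    public static long usedMemoryMb() {
        Runtime rt = Runtime.getRuntime();
        return (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("used: " + usedMemoryMb() + " MB");
    }
}
```

On the robot you would send this to telemetry instead of System.out.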

These are the camera utilities. You can copy them into your TeamCode folder so you can use them later.
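One of those utilities, opening the webcam, could look roughly like this; it follows the SDK's ConceptWebcam sample, and cameraManager, cameraName, and camera are assumed member fields of the OpMode:

```java
// Ask for camera permission and open the webcam, with a timeout.
private void openCamera() {
    if (camera != null) return;  // already opened
    Deadline deadline = new Deadline(5, TimeUnit.SECONDS);
    camera = cameraManager.requestPermissionAndOpenCamera(deadline, cameraName, null);
    if (camera == null) {
        telemetry.addData("Error", "camera not found or permission denied: %s", cameraName);
    }
}
```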

This saves the current Bitmap on the Control Hub under a given name.
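A sketch of that save step, compressing the Bitmap to a .jpg in the data folder the SDK exposes through AppUtil; the method signature is an assumption for illustration:

```java
// Write the Bitmap to the Control Hub's data directory as a JPEG.
private File saveBitmap(Bitmap bitmap, String name) {
    File file = new File(AppUtil.ROBOT_DATA_DIR, name + ".jpg");
    try (FileOutputStream out = new FileOutputStream(file)) {
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, out);
        return file;
    } catch (IOException e) {
        RobotLog.ee("Bitmaps", e, "exception saving %s", file.getName());
        return null;
    }
}
```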

Alright! Now for the fun part: coding the getRGBvalues() function!

This takes in a Bitmap and returns a list of RGB values. It loops through all the x and y pixels, from the lowest pixel to the highest, adds each (R, G, B) value to the list, and then returns it.
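The loop itself can be sketched in plain Java. On Android you would first copy the pixels out with bmp.getPixels(pixels, 0, width, 0, 0, width, height); here the method takes that int[] directly so the sketch stays self-contained and runnable off-robot:

```java
import java.util.ArrayList;
import java.util.List;

public class RgbValues {
    // Walk every pixel row by row, unpack its packed ARGB int into an
    // {R, G, B} triple, and collect the triples from the first pixel
    // to the last.
    public static List<int[]> getRGBvalues(int[] pixels, int width, int height) {
        List<int[]> values = new ArrayList<>();
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int pixel = pixels[y * width + x];
                int r = (pixel >> 16) & 0xFF;
                int g = (pixel >> 8) & 0xFF;
                int b = pixel & 0xFF;
                values.add(new int[] { r, g, b });
            }
        }
        return values;
    }

    public static void main(String[] args) {
        // A 2x1 "image": one orange pixel, one black pixel.
        int[] pixels = { 0xFFFF8800, 0xFF000000 };
        for (int[] rgb : getRGBvalues(pixels, 2, 1)) {
            System.out.println(rgb[0] + "," + rgb[1] + "," + rgb[2]);
        }
    }
}
```

Once you have the list, finding the orange pixels is a matter of checking which triples fall in an orange range, which is what the next page covers.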
