OpenCV Java Tutorial 5: Detecting Faces and Taking Photos with a Camera


Now that we are comfortable with the environment and with basic Mat usage, and have gradually become familiar with Java Swing, it is time to move on to several scenarios that drive a camera from OpenCV.

Environment Setup

  1.  A USB camera. I use two kinds here: an ordinary Logitech webcam and a binocular (dual-lens) camera (which will later be used for liveness detection);
  2. Eclipse 2021-12;
  3. JDK 11+, because we use the WindowBuilder form designer plug-in to write Swing code. In Eclipse 2021-12, WindowBuilder requires JDK 11 or later;
  4. Windows 10 as the development environment. You can also use a Mac, but driving a camera from Java on a Mac has a well-known catch: you cannot call the camera directly from inside Eclipse. It fails with "This app has crashed because it attempted to access privacy-sensitive data without a usage description" or with

    OpenCV: not authorized to capture video (status 0), requesting...
    OpenCV: can not spin main run loop from other thread, set OPENCV_AVFOUNDATION_SKIP_AUTH=1 to disable authorization request and perform it in your application.
    OpenCV: camera failed to properly initialize

    Errors like these all come from macOS permission checks: your process is not allowed to access the Mac's built-in devices. If you were writing Swift in Xcode you could fix this via Info.plist, but when you launch a Java main method from Eclipse there is currently no way to grant Eclipse that device access on macOS. To run OpenCV Java with a camera on macOS, you must package the project as an executable jar and start it from a terminal with java -jar. On launch, macOS will ask you to authorize that terminal; click Yes and confirm with your fingerprint or password, then run the java -jar OpenCV application again, and Java can drive the camera on macOS. This makes coding and debugging very inconvenient, which is the main reason we develop OpenCV Java on Windows 10.

Building the Main UI

Our main UI is a Java Swing JFrame application; it looks like this:

Overall Structure

We split the window into an upper and a lower area. The frame is 1024x768, uses a null (absolute) layout, and exits the program when the close button is clicked:

setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
setBounds(100, 100, 1024, 768);
contentPane = new JPanel();
contentPane.setBorder(new EmptyBorder(5, 5, 5, 5));
setContentPane(contentPane);
contentPane.setLayout(null);

Upper Area

We group this area with a JPanel named cameraGroup, which also uses a null layout:

JPanel cameraGroup = new JPanel();
cameraGroup.setBounds(10, 10, 988, 580);
contentPane.add(cameraGroup);
cameraGroup.setLayout(null);

Inside this cameraGroup we place two more JPanels, a large one on the left and a small one on the right:

  1. videoCamera
  2. videoPreview

videoCamera is a custom JPanel:

protected static VideoPanel videoCamera = new VideoPanel();

While the camera is open, it is used to continuously "paint" the frames grabbed from the camera onto the JPanel. The code is as follows:

package org.mk.opencv;

import java.awt.*;
import java.awt.image.BufferedImage;
import javax.swing.*;

import org.mk.opencv.util.ImageUtils;
import org.mk.opencv.util.OpenCVUtil;
import org.opencv.core.Mat;

public class VideoPanel extends JPanel {

	private Image image;

	public void setImageWithMat(Mat mat) {
		image = OpenCVUtil.matToBufferedImage(mat);
		this.repaint();
	}

	public void SetImageWithImg(Image img) {
		image = img;
	}

	public Mat getMatFromImage() {
		Mat faceMat = new Mat();
		BufferedImage bi = ImageUtils.toBufferedImage(image);
		faceMat = OpenCVUtil.bufferedImageToMat(bi);
		return faceMat;
	}

	@Override
	protected void paintComponent(Graphics g) {
		super.paintComponent(g);
		if (image != null)
			g.drawImage(image, 0, 0, image.getWidth(null), image.getHeight(null), this);
	}

	public static VideoPanel show(String title, int width, int height, int open) {
		JFrame frame = new JFrame(title);
		if (open == 0) {
			frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
		} else {
			frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
		}

		frame.setSize(width, height);
		frame.setBounds(0, 0, width, height);
		VideoPanel videoPanel = new VideoPanel();
		videoPanel.setSize(width, height);
		frame.setContentPane(videoPanel);
		frame.setVisible(true);
		return videoPanel;
	}
}

Lower Area

In the lower area we place a buttonGroup panel. It uses a grid layout and holds three buttons.

JPanel buttonGroup = new JPanel();
buttonGroup.setBounds(65, 610, 710, 35);
contentPane.add(buttonGroup);
buttonGroup.setLayout(new GridLayout(1, 0, 0, 0));
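GridLayout(1, 0, 0, 0) means one row, as many columns as there are components, and no gaps, so the three buttons split the panel's 710-pixel width evenly. A small stand-alone sketch (not part of the project; the class name is made up) that verifies this off-screen:

```java
import java.awt.Component;
import java.awt.GridLayout;
import javax.swing.JButton;
import javax.swing.JPanel;

public class GridLayoutDemo {

    // builds the article's buttonGroup, lays it out off-screen, returns the button widths
    public static int[] buttonWidths() {
        JPanel buttonGroup = new JPanel(new GridLayout(1, 0, 0, 0)); // 1 row, n columns, no gaps
        buttonGroup.add(new JButton("Take Photo"));
        buttonGroup.add(new JButton("Train"));
        buttonGroup.add(new JButton("Identify"));
        buttonGroup.setSize(710, 35);  // same size as in the article
        buttonGroup.doLayout();        // run the layout without showing a window
        Component[] cs = buttonGroup.getComponents();
        int[] widths = new int[cs.length];
        for (int i = 0; i < cs.length; i++) {
            widths[i] = cs[i].getWidth();
        }
        return widths;
    }

    public static void main(String[] args) {
        for (int w : buttonWidths()) {
            System.out.println("button width: " + w); // three equal widths, about 710/3 each
        }
    }
}
```

Because the layout is computed without a visible window, this also runs in a headless environment.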

In this article we implement the functionality behind photoButton.

With the layout covered, let's walk through the core code.

Core Code and Key Points (full source at the end)

How to display camera frames in a JPanel

A JPanel is normally nested inside a JFrame's contentPane (the container that a designer-generated JFrame provides to "hold" other components).

You can think of contentPane as a container. The nesting usually looks like this:

JFrame (our main class) -> contentPane -> our upper JPanel -> videoCamera (a JPanel).

Java Swing provides a repaint() method. When it is called on a component, the

protected void paintComponent(Graphics g)

method of that component and of each of its child components is scheduled to run again (on the event dispatch thread).

That is why we defined a custom JPanel called VideoPanel and overrode its paintComponent method:

@Override
	protected void paintComponent(Graphics g) {
		super.paintComponent(g);
		if (image != null)
			g.drawImage(image, 0, 0, image.getWidth(null), image.getHeight(null), this);
	}

So, in our main class FaceRecognize, after grabbing a frame from the camera we set it through VideoPanel's setImageWithMat method and immediately call FaceRecognize's own repaint method. The repaint then propagates down from the parent, "refreshing" the children level by level: each child's paintComponent is triggered once.
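This set-then-repaint chain can be exercised without a camera. The sketch below uses a minimal stand-in for the article's VideoPanel (the class and method names here are illustrative) and paints it into an off-screen BufferedImage; the overridden paintComponent is what copies the image onto the panel:

```java
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Image;
import java.awt.image.BufferedImage;
import javax.swing.JPanel;

public class PaintDemo {
    // minimal stand-in for the article's VideoPanel
    static class ImagePanel extends JPanel {
        private Image image;
        void setImage(Image img) { image = img; }
        @Override protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            if (image != null) g.drawImage(image, 0, 0, this);
        }
    }

    // paints the panel into an off-screen buffer and returns the center pixel (ARGB)
    public static int paintedPixel() {
        BufferedImage src = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        Graphics sg = src.getGraphics();
        sg.setColor(Color.RED);
        sg.fillRect(0, 0, 10, 10); // a solid red "frame"
        sg.dispose();

        ImagePanel panel = new ImagePanel();
        panel.setSize(10, 10);
        panel.setImage(src); // like setImageWithMat, minus the Mat conversion

        BufferedImage target = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        Graphics tg = target.getGraphics();
        panel.paint(tg); // paint() invokes paintComponent, as a repaint would
        tg.dispose();
        return target.getRGB(5, 5);
    }

    public static void main(String[] args) {
        System.out.printf("center pixel = 0x%08X%n", paintedPixel());
    }
}
```

Rendering into a BufferedImage instead of a window keeps the demo runnable on a headless machine.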

Displaying camera frames in the videoCamera area therefore works like this:

  1. Continuously read Mat objects from the camera in the FaceRecognize class;
  2. Set each Mat on the VideoPanel;
  3. Keep calling FaceRecognize's repaint method to force the VideoPanel to "refresh" with what the camera captured;
  4. After each frame is displayed, sleep for a few dozen milliseconds (the code below uses 80 ms);

To keep the refresh smooth and continuous, wrap the steps above in a dedicated thread.
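The four steps can be sketched without OpenCV or a camera. Below, a hypothetical FrameSource stands in for VideoCapture, and a counter stands in for setImageWithMat plus repaint, so only the loop structure (grab, hand over, refresh, sleep) is shown:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class CaptureLoopSketch {
    // stand-in for VideoCapture: "grabs" a fixed number of frames, then runs dry
    static class FrameSource {
        private int remaining;
        FrameSource(int frames) { this.remaining = frames; }
        boolean read() { return remaining-- > 0; } // true while a frame is available
    }

    // runs the capture loop on its own thread and returns how many frames were "painted"
    public static int runLoop(int frames, long sleepMillis) {
        AtomicInteger painted = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(1);
        new Thread(() -> {
            FrameSource capture = new FrameSource(frames);
            try {
                while (true) {
                    if (!capture.read()) break;  // like capture.read(faceMat) + the empty check
                    painted.incrementAndGet();   // stands in for setImageWithMat + repaint
                    Thread.sleep(sleepMillis);   // throttle, like Thread.sleep(80)
                }
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            } finally {
                done.countDown();
            }
        }).start();
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return painted.get();
    }

    public static void main(String[] args) {
        System.out.println("frames painted: " + runLoop(5, 1)); // prints 5
    }
}
```

In the real application the loop runs until the camera stops delivering frames, so there is no frame budget; the counter here only exists to make the sketch terminate.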

Driving the camera with OpenCV

OpenCV drives the camera through the following class:

private static VideoCapture capture = new VideoCapture();

Then we open the camera and read from it:

capture.open(0);
Scalar color = new Scalar(0, 255, 0);
MatOfRect faces = new MatOfRect();
if (capture.isOpened()) {
	logger.info(">>>>>>video camera in working");
	Mat faceMat = new Mat();
	while (true) {
		capture.read(faceMat);
		if (!faceMat.empty()) {
			faceCascade.detectMultiScale(faceMat, faces);
			Rect[] facesArray = faces.toArray();
			if (facesArray.length >= 1) {
				for (int i = 0; i < facesArray.length; i++) {
					Imgproc.rectangle(faceMat, facesArray[i].tl(), facesArray[i].br(), color, 2);
					videoPanel.setImageWithMat(faceMat);
					frame.repaint();									
				}
			}
		} else {
			logger.info(">>>>>>not found anyinput");
			break;
		}
		Thread.sleep(80);
	}
}

The code above shows the four steps I described.

  • capture.open(0) opens the first camera (index 0) attached to your machine. On a Mac, most of which have a built-in camera, this line opens the default built-in camera;
  • if (capture.isOpened()) is a must. Many online tutorials skip this check, and when the camera never shows anything, people burn hours debugging their code before discovering that the camera driver was faulty or the camera itself was broken; swapping in another camera would have fixed it;
  • while (true) followed by capture.read(faceMat) continuously reads frames from the camera into a Mat object;

As mentioned above, to make this feel smooth, I run the whole loop in a separate thread so it does not block the Swing UI, and I draw a green rectangle around each face in the frame. Here is the method I wrote for this:

public void invokeCamera(JFrame frame, VideoPanel videoPanel) {
		new Thread() {
			public void run() {
				CascadeClassifier faceCascade = new CascadeClassifier();
				faceCascade.load(cascadeFileFullPath);
				try {
					capture.open(0);
					Scalar color = new Scalar(0, 255, 0);
					MatOfRect faces = new MatOfRect();
					// Mat faceFrames = new Mat();
					if (capture.isOpened()) {
						logger.info(">>>>>>video camera in working");
						Mat faceMat = new Mat();
						while (true) {
							capture.read(faceMat);
							if (!faceMat.empty()) {
								faceCascade.detectMultiScale(faceMat, faces);
								Rect[] facesArray = faces.toArray();
								if (facesArray.length >= 1) {
									for (int i = 0; i < facesArray.length; i++) {
										Imgproc.rectangle(faceMat, facesArray[i].tl(), facesArray[i].br(), color, 2);
										videoPanel.setImageWithMat(faceMat);
										frame.repaint();
										// videoPanel.repaint();
									}
								}
							} else {
								logger.info(">>>>>>not found anyinput");
								break;
							}
							Thread.sleep(80);
						}
					}
				} catch (Exception e) {
					logger.error("invoke camera error: " + e.getMessage(), e);
				}
			}
		}.start();
	}

Together with our main method, it is used like this:

public static void main(String[] args) {
		FaceRecognize frame = new FaceRecognize();
		frame.setVisible(true);
		frame.invokeCamera(frame, videoCamera);
	}

Taking Photos with the Camera

We already covered the underlying mechanism in OpenCV Java Tutorial 4 (recognizing "a face"): writing a Mat out to a jpg file.

In this article, to polish the result, we do three things:

  1. Proportionally scale the Mat grabbed from the camera down onto the "videoPreview" panel;
  2. Write the camera's current Mat out to an external file;
  3. Wrap the whole process in its own thread so the main UI is not blocked;
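Step 2 amounts to composing a timestamped file name and writing the image, as TakePhotoProcess does with Imgcodecs.imwrite. A sketch of the same idea using javax.imageio.ImageIO on a BufferedImage instead of OpenCV (the class name and the temp directory are made up for illustration):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import javax.imageio.ImageIO;

public class PhotoWriteSketch {
    // writes the image as <dir>/<timestamp>.jpg and returns the file,
    // mirroring what TakePhotoProcess.run() does with a Mat
    public static File writeTimestamped(BufferedImage img, File dir) throws IOException {
        File out = new File(dir, System.currentTimeMillis() + ".jpg");
        ImageIO.write(img, "jpg", out);
        return out;
    }

    public static void main(String[] args) throws IOException {
        BufferedImage img = new BufferedImage(165, 200, BufferedImage.TYPE_INT_RGB);
        File dir = Files.createTempDirectory("photos").toFile();
        File saved = writeTimestamped(img, dir);
        System.out.println("wrote " + saved.getName());
    }
}
```

Using the current time in milliseconds as the file name, as the article does, gives each shot a unique name without any extra bookkeeping (as long as two shots do not land in the same millisecond).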

Proportional image scaling

  • The logic lives in the ImageUtils class: it takes a Mat and converts it to a java.awt.Image;
  • It then uses AffineTransformOp to scale the image proportionally (preserving the original ratio) to the target size (width: 165, height: 200), and converts the result to a BufferedImage;
  • The BufferedImage is converted back to a Mat and handed to the FaceRecognize main class, which displays it in the preview area; the preview area is also declared as a VideoPanel;
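The ratio selection described above can be isolated into a short sketch (a trimmed variant of scale2 without the padding step; the class name is made up):

```java
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

public class ScaleSketch {
    // scales bi proportionally so it fits inside width x height,
    // choosing the ratio from the longer side as scale2 does before padding
    public static BufferedImage scaleToFit(BufferedImage bi, int width, int height) {
        double ratio;
        if (bi.getHeight() > bi.getWidth()) {
            ratio = (double) height / bi.getHeight(); // portrait: match the target height
        } else {
            ratio = (double) width / bi.getWidth();   // landscape: match the target width
        }
        AffineTransformOp op = new AffineTransformOp(
                AffineTransform.getScaleInstance(ratio, ratio), null);
        return op.filter(bi, null); // null dst: the op allocates the scaled image
    }

    public static void main(String[] args) {
        BufferedImage frame = new BufferedImage(640, 480, BufferedImage.TYPE_3BYTE_BGR);
        BufferedImage preview = scaleToFit(frame, 165, 200);
        System.out.println(preview.getWidth() + "x" + preview.getHeight()); // fits inside 165x200
    }
}
```

The padding step in scale2 then centers this result on a white 165x200 canvas so the preview panel is always filled edge to edge.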

With that in place, we wire up the photoButton event:

JButton photoButton = new JButton("Take Photo");
		photoButton.addActionListener(new ActionListener() {
			public void actionPerformed(ActionEvent e) {
				logger.info(">>>>>>take photo performed");
				StringBuffer photoPathStr = new StringBuffer();
				photoPathStr.append(photoPath);
				try {
					if (capture.isOpened()) {
						Mat myFace = new Mat();
						while (true) {
							capture.read(myFace);
							if (!myFace.empty()) {
								Image previewImg = ImageUtils.scale2(myFace, 165, 200, true);// scale proportionally
								TakePhotoProcess takePhoto = new TakePhotoProcess(photoPath, myFace);
								takePhoto.start();// write the photo to disk
								videoPreview.SetImageWithImg(previewImg);// show the scaled photo in the preview area
								videoPreview.repaint();// repaint the preview area
								break;
							}
						}
					}
				} catch (Exception ex) {
					logger.error(">>>>>>take photo error: " + ex.getMessage(), ex);
				}
			}
		});

TakePhotoProcess runs on its own thread (it extends Thread); the code is as follows:

package org.mk.opencv.sample;

import org.apache.log4j.Logger;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;

public class TakePhotoProcess extends Thread {
	private static Logger logger = Logger.getLogger(TakePhotoProcess.class);

	private String imgPath;
	private Mat faceMat;
	private final static Scalar color = new Scalar(0, 0, 255);

	public TakePhotoProcess(String imgPath, Mat faceMat) {
		this.imgPath = imgPath;
		this.faceMat = faceMat;
	}

	public void run() {
		try {
			long currentTime = System.currentTimeMillis();
			StringBuffer samplePath = new StringBuffer();
			samplePath.append(imgPath).append(currentTime).append(".jpg");
			Imgcodecs.imwrite(samplePath.toString(), faceMat);
			logger.info(">>>>>>write image into->" + samplePath.toString());

		} catch (Exception e) {
			logger.error(e.getMessage(), e);
		}
	}

}

We will cover the other two buttons, "trainButton" and "identifyButton", in the next two articles. We are taking this step by step so that everyone builds a solid foundation.

When FaceRecognize runs and you click photoButton, the effect is shown in the figure below:

Full Source

OpenCVUtil.java

package org.mk.opencv.util;

import java.awt.Image;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.io.File;

import org.apache.log4j.Logger;
import org.opencv.core.CvType;
import org.opencv.core.Mat;


public class OpenCVUtil {
	private static Logger logger = Logger.getLogger(OpenCVUtil.class);

	public static Image matToImage(Mat matrix) {
		int type = BufferedImage.TYPE_BYTE_GRAY;
		if (matrix.channels() > 1) {
			type = BufferedImage.TYPE_3BYTE_BGR;
		}
		int bufferSize = matrix.channels() * matrix.cols() * matrix.rows();
		byte[] buffer = new byte[bufferSize];
		matrix.get(0, 0, buffer); // grab all pixels
		BufferedImage image = new BufferedImage(matrix.cols(), matrix.rows(), type);
		final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
		System.arraycopy(buffer, 0, targetPixels, 0, buffer.length);
		return image;
	}

	public static List<String> getFilesFromFolder(String folderPath) {
		List<String> fileList = new ArrayList<String>();
		File f = new File(folderPath);
		if (f.isDirectory()) {
			File[] files = f.listFiles();
			for (File singleFile : files) {
				fileList.add(singleFile.getPath());
			}
		}
		return fileList;
	}

	public static String randomFileName() {
		StringBuffer fn = new StringBuffer();
		fn.append(System.currentTimeMillis()).append((int) (System.currentTimeMillis() % (10000 - 1) + 1))
				.append(".jpg");
		return fn.toString();
	}

	public static List<FileBean> getPicFromFolder(String rootPath) {
		List<FileBean> fList = new ArrayList<FileBean>();
		int fileNum = 0, folderNum = 0;
		File file = new File(rootPath);
		if (file.exists()) {
			LinkedList<File> list = new LinkedList<File>();
			File[] files = file.listFiles();
			for (File file2 : files) {
				if (file2.isDirectory()) {
					// logger.info(">>>>>>folder: " + file2.getAbsolutePath());
					list.add(file2);
					folderNum++;
				} else {
					// logger.info(">>>>>>file: " + file2.getAbsolutePath());
					FileBean f = new FileBean();
					String fileName = file2.getName();
					String suffix = fileName.substring(fileName.lastIndexOf(".") + 1);
					File fParent = new File(file2.getParent());
					String parentFolderName = fParent.getName();
					f.setFileFullPath(file2.getAbsolutePath());
					f.setFileType(suffix);
					f.setFolderName(parentFolderName);
					fList.add(f);
					fileNum++;
				}
			}
			File temp_file;
			while (!list.isEmpty()) {
				temp_file = list.removeFirst();
				files = temp_file.listFiles();
				for (File file2 : files) {
					if (file2.isDirectory()) {
						// System.out.println("folder: " + file2.getAbsolutePath());
						list.add(file2);
						folderNum++;
					} else {
						// logger.info(">>>>>>file: " + file2.getAbsolutePath());
						FileBean f = new FileBean();
						String fileName = file2.getName();
						String suffix = fileName.substring(fileName.lastIndexOf(".") + 1);
						File fParent = new File(file2.getParent());
						String parentFolderName = fParent.getName();
						f.setFileFullPath(file2.getAbsolutePath());
						f.setFileType(suffix);
						f.setFolderName(parentFolderName);
						fList.add(f);
						fileNum++;
					}
				}
			}
		} else {
			logger.info(">>>>>>file does not exist!");
		}
		// logger.info(">>>>>>folders: " + folderNum + ", files: " + fileNum);
		return fList;
	}

	public static BufferedImage matToBufferedImage(Mat matrix) {
		int cols = matrix.cols();
		int rows = matrix.rows();
		int elemSize = (int) matrix.elemSize();
		byte[] data = new byte[cols * rows * elemSize];
		int type;
		matrix.get(0, 0, data);
		switch (matrix.channels()) {
		case 1:
			type = BufferedImage.TYPE_BYTE_GRAY;
			break;
		case 3:
			type = BufferedImage.TYPE_3BYTE_BGR;
			// bgr to rgb
			byte b;
			for (int i = 0; i < data.length; i = i + 3) {
				b = data[i];
				data[i] = data[i + 2];
				data[i + 2] = b;
			}
			break;
		default:
			return null;
		}
		BufferedImage image2 = new BufferedImage(cols, rows, type);
		image2.getRaster().setDataElements(0, 0, cols, rows, data);
		return image2;
	}

	public static Mat bufferedImageToMat(BufferedImage bi) {
		Mat mat = new Mat(bi.getHeight(), bi.getWidth(), CvType.CV_8UC3);
		byte[] data = ((DataBufferByte) bi.getRaster().getDataBuffer()).getData();
		mat.put(0, 0, data);
		return mat;
	}
}

ImageUtils.java

package org.mk.opencv.util;

import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;
import java.awt.HeadlessException;
import java.awt.Image;
import java.awt.Transparency;
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

import javax.imageio.ImageIO;
import javax.swing.ImageIcon;

import org.opencv.core.Mat;

public class ImageUtils {

	/**
	 * Several common image formats
	 */
	public static String IMAGE_TYPE_GIF = "gif";// Graphics Interchange Format
	public static String IMAGE_TYPE_JPG = "jpg";// JPEG
	public static String IMAGE_TYPE_JPEG = "jpeg";// JPEG
	public static String IMAGE_TYPE_BMP = "bmp";// Bitmap, the standard image format on Windows
	public static String IMAGE_TYPE_PNG = "png";// Portable Network Graphics
	public static String IMAGE_TYPE_PSD = "psd";// Photoshop's native format

	/**
	 * Scales an image proportionally to the given width and height.
	 * 
	 * @param mat    source image as an OpenCV Mat
	 * @param height target height
	 * @param width  target width
	 * @param bb     whether to pad with white when the aspect ratios differ:
	 *               true to pad, false not to
	 */
	public final synchronized static Image scale2(Mat mat, int height, int width, boolean bb) throws Exception {
		// boolean flg = false;
		Image itemp = null;
		try {
			double ratio = 0.0; // scale ratio
			// File f = new File(srcImageFile);
			// BufferedImage bi = ImageIO.read(f);
			BufferedImage bi = OpenCVUtil.matToBufferedImage(mat);
			itemp = bi.getScaledInstance(width, height, Image.SCALE_SMOOTH);
			// compute the ratio
			// if ((bi.getHeight() > height) || (bi.getWidth() > width)) {
			// flg = true;
			if (bi.getHeight() > bi.getWidth()) {
				ratio = Integer.valueOf(height).doubleValue() / bi.getHeight();
			} else {
				ratio = Integer.valueOf(width).doubleValue() / bi.getWidth();
			}
			AffineTransformOp op = new AffineTransformOp(AffineTransform.getScaleInstance(ratio, ratio), null);
			itemp = op.filter(bi, null);
			// }
			if (bb) {// pad with white to the exact target size
				BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
				Graphics2D g = image.createGraphics();
				g.setColor(Color.white);
				g.fillRect(0, 0, width, height);
				if (width == itemp.getWidth(null))
					g.drawImage(itemp, 0, (height - itemp.getHeight(null)) / 2, itemp.getWidth(null),
							itemp.getHeight(null), Color.white, null);
				else
					g.drawImage(itemp, (width - itemp.getWidth(null)) / 2, 0, itemp.getWidth(null),
							itemp.getHeight(null), Color.white, null);
				g.dispose();
				itemp = image;
			}
			// if (flg)
			// ImageIO.write((BufferedImage) itemp, "JPEG", new File(result));
		} catch (Exception e) {
			throw new Exception("scale2 error: " + e.getMessage(), e);
		}
		return itemp;
	}

	public static BufferedImage toBufferedImage(Image image) {
		if (image instanceof BufferedImage) {
			return (BufferedImage) image;
		}

		// this ensures that all pixels of the image are loaded
		image = new ImageIcon(image).getImage();

		// use this if the image has transparency
//		boolean hasAlpha = hasAlpha(image);

		// create a BufferedImage in a format compatible with the screen
		BufferedImage bimage = null;
		GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
		try {
			// determine the transparency of the new buffered image
			int transparency = Transparency.OPAQUE;
			// if (hasAlpha) {
			transparency = Transparency.BITMASK;
			// }

			// create the BufferedImage
			GraphicsDevice gs = ge.getDefaultScreenDevice();
			GraphicsConfiguration gc = gs.getDefaultConfiguration();
			bimage = gc.createCompatibleImage(image.getWidth(null), image.getHeight(null), transparency);
		} catch (HeadlessException e) {
			// headless environment: the system has no screen
		}

		if (bimage == null) {
			// fall back to a BufferedImage with a default color model
			int type = BufferedImage.TYPE_INT_RGB;
			// int type = BufferedImage.TYPE_3BYTE_BGR;//by wang
			// if (hasAlpha) {
			type = BufferedImage.TYPE_INT_ARGB;
			// }
			bimage = new BufferedImage(image.getWidth(null), image.getHeight(null), type);
		}

		// copy the image onto the BufferedImage
		Graphics g = bimage.createGraphics();

		// draw the image onto the BufferedImage
		g.drawImage(image, 0, 0, null);
		g.dispose();

		return bimage;
	}
}

FileBean.java

package org.mk.opencv.util;

import java.io.Serializable;

public class FileBean implements Serializable {

	private String fileFullPath;
	private String folderName;
	private String fileType;

	public String getFileType() {
		return fileType;
	}

	public void setFileType(String fileType) {
		this.fileType = fileType;
	}

	public String getFileFullPath() {
		return fileFullPath;
	}

	public void setFileFullPath(String fileFullPath) {
		this.fileFullPath = fileFullPath;
	}

	public String getFolderName() {
		return folderName;
	}

	public void setFolderName(String folderName) {
		this.folderName = folderName;
	}

}

VideoPanel.java

package org.mk.opencv;

import java.awt.*;
import java.awt.image.BufferedImage;
import javax.swing.*;

import org.mk.opencv.util.ImageUtils;
import org.mk.opencv.util.OpenCVUtil;
import org.opencv.core.Mat;

public class VideoPanel extends JPanel {

	private Image image;

	public void setImageWithMat(Mat mat) {
		image = OpenCVUtil.matToBufferedImage(mat);
		this.repaint();
	}

	public void SetImageWithImg(Image img) {
		image = img;
	}

	public Mat getMatFromImage() {
		Mat faceMat = new Mat();
		BufferedImage bi = ImageUtils.toBufferedImage(image);
		faceMat = OpenCVUtil.bufferedImageToMat(bi);
		return faceMat;
	}

	@Override
	protected void paintComponent(Graphics g) {
		super.paintComponent(g);
		if (image != null)
			g.drawImage(image, 0, 0, image.getWidth(null), image.getHeight(null), this);
	}

	public static VideoPanel show(String title, int width, int height, int open) {
		JFrame frame = new JFrame(title);
		if (open == 0) {
			frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
		} else {
			frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
		}

		frame.setSize(width, height);
		frame.setBounds(0, 0, width, height);
		VideoPanel videoPanel = new VideoPanel();
		videoPanel.setSize(width, height);
		frame.setContentPane(videoPanel);
		frame.setVisible(true);
		return videoPanel;
	}
}

TakePhotoProcess.java

package org.mk.opencv.sample;

import org.apache.log4j.Logger;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;

public class TakePhotoProcess extends Thread {
	private static Logger logger = Logger.getLogger(TakePhotoProcess.class);

	private String imgPath;
	private Mat faceMat;
	private final static Scalar color = new Scalar(0, 0, 255);

	public TakePhotoProcess(String imgPath, Mat faceMat) {
		this.imgPath = imgPath;
		this.faceMat = faceMat;
	}

	public void run() {
		try {
			long currentTime = System.currentTimeMillis();
			StringBuffer samplePath = new StringBuffer();
			samplePath.append(imgPath).append(currentTime).append(".jpg");
			Imgcodecs.imwrite(samplePath.toString(), faceMat);
			logger.info(">>>>>>write image into->" + samplePath.toString());

		} catch (Exception e) {
			logger.error(e.getMessage(), e);
		}
	}

}

FaceRecognize.java (the main class)

package org.mk.opencv.sample;

import java.awt.EventQueue;

import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.border.EmptyBorder;

import org.apache.log4j.Logger;
import org.mk.opencv.VideoPanel;
import org.mk.opencv.util.ImageUtils;
import org.mk.opencv.util.OpenCVUtil;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.videoio.VideoCapture;

import javax.swing.border.BevelBorder;
import javax.swing.JLabel;
import javax.swing.SwingConstants;
import java.awt.GridLayout;
import java.awt.Image;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

import javax.swing.JButton;

public class FaceRecognize extends JFrame {
	static {
		System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
	}
	private static Logger logger = Logger.getLogger(FaceRecognize.class);
	private static final String cascadeFileFullPath = "D:\\opencvinstall\\build\\install\\etc\\lbpcascades\\lbpcascade_frontalface.xml";
	private static final String photoPath = "D:\\opencv-demo\\face\\";
	private JPanel contentPane;
	protected static VideoPanel videoCamera = new VideoPanel();
	private static final Size faceSize = new Size(165, 200);
	private static VideoCapture capture = new VideoCapture();

	/**
	 * Launch the application.
	 */
	public static void main(String[] args) {
		FaceRecognize frame = new FaceRecognize();
		frame.setVisible(true);
		frame.invokeCamera(frame, videoCamera);
	}

	public void invokeCamera(JFrame frame, VideoPanel videoPanel) {
		new Thread() {
			public void run() {
				CascadeClassifier faceCascade = new CascadeClassifier();
				faceCascade.load(cascadeFileFullPath);
				try {
					capture.open(0);
					Scalar color = new Scalar(0, 255, 0);
					MatOfRect faces = new MatOfRect();
					// Mat faceFrames = new Mat();
					if (capture.isOpened()) {
						logger.info(">>>>>>video camera in working");
						Mat faceMat = new Mat();
						while (true) {
							capture.read(faceMat);
							if (!faceMat.empty()) {
								faceCascade.detectMultiScale(faceMat, faces);
								Rect[] facesArray = faces.toArray();
								if (facesArray.length >= 1) {
									for (int i = 0; i < facesArray.length; i++) {
										Imgproc.rectangle(faceMat, facesArray[i].tl(), facesArray[i].br(), color, 2);
										videoPanel.setImageWithMat(faceMat);
										frame.repaint();
										// videoPanel.repaint();
									}
								}
							} else {
								logger.info(">>>>>>not found anyinput");
								break;
							}
							Thread.sleep(80);
						}
					}
				} catch (Exception e) {
					logger.error("invoke camera error: " + e.getMessage(), e);
				}
			}
		}.start();
	}

	/**
	 * Create the frame.
	 */

	public FaceRecognize() {

		setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
		setBounds(100, 100, 1024, 768);
		contentPane = new JPanel();
		contentPane.setBorder(new EmptyBorder(5, 5, 5, 5));
		setContentPane(contentPane);
		contentPane.setLayout(null);

		JPanel cameraGroup = new JPanel();
		cameraGroup.setBounds(10, 10, 988, 580);
		contentPane.add(cameraGroup);
		cameraGroup.setLayout(null);

		JLabel videoDescriptionLabel = new JLabel("Video");
		videoDescriptionLabel.setHorizontalAlignment(SwingConstants.CENTER);
		videoDescriptionLabel.setBounds(0, 10, 804, 23);
		cameraGroup.add(videoDescriptionLabel);

		videoCamera.setBorder(new BevelBorder(BevelBorder.LOWERED, null, null, null, null));
		videoCamera.setBounds(10, 43, 794, 527);
		cameraGroup.add(videoCamera);

		// JPanel videoPreview = new JPanel();
		VideoPanel videoPreview = new VideoPanel();
		videoPreview.setBorder(new BevelBorder(BevelBorder.LOWERED, null, null, null, null));
		videoPreview.setBounds(807, 359, 171, 211);
		cameraGroup.add(videoPreview);

		JLabel lblNewLabel = new JLabel("Preview");
		lblNewLabel.setHorizontalAlignment(SwingConstants.CENTER);
		lblNewLabel.setBounds(807, 307, 171, 42);
		cameraGroup.add(lblNewLabel);

		JPanel buttonGroup = new JPanel();
		buttonGroup.setBounds(65, 610, 710, 35);
		contentPane.add(buttonGroup);
		buttonGroup.setLayout(new GridLayout(1, 0, 0, 0));

		JButton photoButton = new JButton("Take Photo");
		photoButton.addActionListener(new ActionListener() {
			public void actionPerformed(ActionEvent e) {
				logger.info(">>>>>>take photo performed");
				StringBuffer photoPathStr = new StringBuffer();
				photoPathStr.append(photoPath);
				try {
					if (capture.isOpened()) {
						Mat myFace = new Mat();
						while (true) {
							capture.read(myFace);
							if (!myFace.empty()) {
								Image previewImg = ImageUtils.scale2(myFace, 165, 200, true);// scale proportionally
								TakePhotoProcess takePhoto = new TakePhotoProcess(photoPath, myFace);
								takePhoto.start();// write the photo to disk
								videoPreview.SetImageWithImg(previewImg);// show the scaled photo in the preview area
								videoPreview.repaint();// repaint the preview area
								break;
							}
						}
					}
				} catch (Exception ex) {
					logger.error(">>>>>>take photo error: " + ex.getMessage(), ex);
				}
			}
		});
		buttonGroup.add(photoButton);

		JButton trainButton = new JButton("Train");
		buttonGroup.add(trainButton);

		JButton identifyButton = new JButton("Identify");
		buttonGroup.add(identifyButton);
	}
}