OpenCV Panorama Stitching

[Image: bryce_result_02 — example panorama stitched from two input images]

In today's blog post, I'll demonstrate how to perform image stitching and panorama construction using Python and OpenCV. Given two images, we'll "stitch" them together to create a simple panorama, as seen in the example above.

To construct our image panorama, we'll utilize computer vision and image processing techniques such as: keypoint detection and local invariant descriptors; keypoint matching; RANSAC; and perspective warping.

Since there are major differences in how OpenCV 2.4.X and OpenCV 3.X handle keypoint detection and local invariant descriptors (such as SIFT and SURF), I've taken special care to provide code that is compatible with both versions (provided that you compiled OpenCV 3 with opencv_contrib support, of course).

In future blog posts we'll extend our panorama stitching code to work with multiple images rather than just two.

Read on to find out how panorama stitching with OpenCV is done.


OpenCV panorama stitching

Our panorama stitching algorithm consists of four steps:

  • Step #1: Detect keypoints (DoG, Harris, etc.) and extract local invariant descriptors (SIFT, SURF, etc.) from the two input images.
  • Step #2: Match the descriptors between the two images.
  • Step #3: Use the RANSAC algorithm to estimate a homography matrix from our matched feature vectors.
  • Step #4: Apply a warping transformation using the homography matrix obtained from Step #3 (see the condensed sketch right after this list).
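
Before walking through the full Stitcher class, here is a condensed, hedged sketch of those four steps as a single script. The file names left.png and right.png are hypothetical, and the sketch assumes an opencv_contrib build of OpenCV 3 (on OpenCV 4.4+ you would swap cv2.xfeatures2d.SIFT_create for cv2.SIFT_create); the class below is the actual code this post builds.

# a minimal sketch of the four-step pipeline, not the post's final code
import numpy as np
import cv2

# hypothetical file names: any two overlapping photos, left-to-right order
left = cv2.imread("left.png")
right = cv2.imread("right.png")

# Step #1: detect keypoints and extract SIFT descriptors
sift = cv2.xfeatures2d.SIFT_create()
(kpsA, featuresA) = sift.detectAndCompute(right, None)
(kpsB, featuresB) = sift.detectAndCompute(left, None)

# Step #2: brute-force match descriptors, keep matches passing Lowe's ratio test
matcher = cv2.BFMatcher(cv2.NORM_L2)
rawMatches = matcher.knnMatch(featuresA, featuresB, k=2)
good = [m[0] for m in rawMatches
	if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

# Step #3: estimate the homography (right image -> left image frame) via RANSAC
ptsA = np.float32([kpsA[m.queryIdx].pt for m in good])
ptsB = np.float32([kpsB[m.trainIdx].pt for m in good])
(H, status) = cv2.findHomography(ptsA, ptsB, cv2.RANSAC, 4.0)

# Step #4: warp the right image onto a wide canvas, then lay the left image on top
result = cv2.warpPerspective(right, H,
	(left.shape[1] + right.shape[1], right.shape[0]))
result[0:left.shape[0], 0:left.shape[1]] = left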

We'll encapsulate all four of these steps inside panorama.py, where we'll define a Stitcher class used to construct our panoramas.

The Stitcher class will rely on the imutils Python package, so if you don't already have it installed on your system, you'll want to go ahead and do that now:

$ pip install imutils          
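
If you installed OpenCV from pip rather than compiling it yourself, the contrib modules that provide SIFT and SURF typically come from a separate package. This is an assumption about your environment rather than a step from this post, and some older pip builds of OpenCV 3 shipped without the (then patented) SIFT/SURF implementations:

$ pip install opencv-contrib-python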

Let's go ahead and get started by reviewing panorama.py:

# import the necessary packages
import numpy as np
import imutils
import cv2

class Stitcher:
	def __init__(self):
		# determine if we are using OpenCV v3.X
		self.isv3 = imutils.is_cv3(or_better=True)

We start off on Lines 2-4 by importing our necessary packages. We'll be using NumPy for matrix/array operations, imutils for a set of OpenCV convenience methods, and finally cv2 for our OpenCV bindings.

From there, we define the Stitcher class on Line 6. The constructor to Stitcher simply checks which version of OpenCV we are using by making a call to the is_cv3 method. Since there are major differences in how OpenCV 2.4 and OpenCV 3 handle keypoint detection and local invariant descriptors, it's important that we determine the version of OpenCV we are using.

Next up, let's start working on the stitch method:

	def stitch(self, images, ratio=0.75, reprojThresh=4.0,
		showMatches=False):
		# unpack the images, then detect keypoints and extract
		# local invariant descriptors from them
		(imageB, imageA) = images
		(kpsA, featuresA) = self.detectAndDescribe(imageA)
		(kpsB, featuresB) = self.detectAndDescribe(imageB)

		# match features between the two images
		M = self.matchKeypoints(kpsA, kpsB,
			featuresA, featuresB, ratio, reprojThresh)

		# if the match is None, then there aren't enough matched
		# keypoints to create a panorama
		if M is None:
			return None

The stitch method requires just a single parameter, images, which is the list of (two) images that we are going to stitch together to form the panorama.

We can also optionally supply ratio, used for David Lowe's ratio test when matching features (more on this ratio test later in the tutorial); reprojThresh, which is the maximum pixel "wiggle room" allowed by the RANSAC algorithm; and finally showMatches, a boolean used to indicate whether the keypoint matches should be visualized or not.
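
As a usage sketch (the parameter values are chosen purely for illustration, and imageA and imageB are assumed to be two already-loaded, left-to-right ordered images): a stricter ratio and a tighter RANSAC threshold keep fewer, higher-quality matches.

# illustrative values only, not prescribed by the post
stitcher = Stitcher()
result = stitcher.stitch([imageA, imageB], ratio=0.7, reprojThresh=3.0)

if result is None:
	print("not enough matched keypoints to build a panorama")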

Line 15 unpacks the images list (which, again, we assume to contain only two images). The ordering of the images list is important: we expect images to be supplied in left-to-right order. If images are not supplied in this order, our code will still run, but the output panorama will contain only one image, not both.

Once we have unpacked the images list, we make a call to the detectAndDescribe method on Lines 16 and 17. This method simply detects keypoints and extracts local invariant descriptors (i.e., SIFT) from the two images.

Given the keypoints and features, we use matchKeypoints (Lines 20 and 21) to match the features in the two images. We'll define this method later in the lesson.

If the returned matches M is None, then not enough keypoints were matched to create a panorama, so we simply return to the calling function (Lines 25 and 26).

Otherwise, we are now ready to apply the perspective transform:

		# otherwise, apply a perspective warp to stitch the images
		# together
		(matches, H, status) = M
		result = cv2.warpPerspective(imageA, H,
			(imageA.shape[1] + imageB.shape[1], imageA.shape[0]))
		result[0:imageB.shape[0], 0:imageB.shape[1]] = imageB

		# check to see if the keypoint matches should be visualized
		if showMatches:
			vis = self.drawMatches(imageA, imageB, kpsA, kpsB, matches,
				status)

			# return a tuple of the stitched image and the
			# visualization
			return (result, vis)

		# return the stitched image
		return result

Provided that M is not None, we unpack the tuple on Line 30, giving us a list of keypoint matches, the homography matrix H derived from the RANSAC algorithm, and finally status, a list of indexes indicating which keypoints in matches were successfully spatially verified using RANSAC.

Given our homography matrix H, we are now ready to stitch the two images together. First, we make a call to cv2.warpPerspective, which requires three arguments: the image we want to warp (in this case, the right image), the 3 x 3 transformation matrix (H), and finally the shape of the output image. We derive the shape of the output image by taking the sum of the widths of both images and then using the height of the second image.
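
As a concrete worked example (the dimensions and the identity homography below are assumed for illustration only; the real values come from the stitch method): two 400 x 300 inputs produce an 800 x 300 warp canvas.

# standalone illustration of the output canvas size, not part of the class
import numpy as np
import cv2

imageA = np.zeros((300, 400, 3), dtype="uint8")  # right image, 400x300
imageB = np.zeros((300, 400, 3), dtype="uint8")  # left image, 400x300
H = np.eye(3)  # placeholder homography

# canvas: sum of the widths x height of imageA -> 800 x 300
result = cv2.warpPerspective(imageA, H,
	(imageA.shape[1] + imageB.shape[1], imageA.shape[0]))
result[0:imageB.shape[0], 0:imageB.shape[1]] = imageB
print(result.shape)  # (300, 800, 3)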

A check is then made to see if we should visualize the keypoint matches, and if so, we make a call to drawMatches and return a tuple of both the panorama and the visualization to the calling method (Lines 37-42).

Otherwise, we simply return the stitched image (Line 45).

Now that the stitch method has been defined, let's look into some of the helper methods it calls. We'll start with detectAndDescribe:

	def detectAndDescribe(self, image):
		# convert the image to grayscale
		gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

		# check to see if we are using OpenCV 3.X
		if self.isv3:
			# detect and extract features from the image
			descriptor = cv2.xfeatures2d.SIFT_create()
			(kps, features) = descriptor.detectAndCompute(image, None)

		# otherwise, we are using OpenCV 2.4.X
		else:
			# detect keypoints in the image
			detector = cv2.FeatureDetector_create("SIFT")
			kps = detector.detect(gray)

			# extract features from the image
			extractor = cv2.DescriptorExtractor_create("SIFT")
			(kps, features) = extractor.compute(gray, kps)

		# convert the keypoints from KeyPoint objects to NumPy
		# arrays
		kps = np.float32([kp.pt for kp in kps])

		# return a tuple of keypoints and features
		return (kps, features)

As the name suggests, the detectAndDescribe method accepts an image, then detects keypoints and extracts local invariant descriptors. In our implementation we use the Difference of Gaussian (DoG) keypoint detector and the SIFT feature extractor.

On Line 52 we check to see if we are using OpenCV 3.X. If we are, then we use the cv2.xfeatures2d.SIFT_create function to instantiate both our DoG keypoint detector and SIFT feature extractor. A call to detectAndCompute handles extracting the keypoints and features (Lines 54 and 55).

It's important to note that you must have compiled OpenCV 3.X with opencv_contrib support enabled. If you did not, you'll get an error such as AttributeError: 'module' object has no attribute 'xfeatures2d'. If that's the case, head over to my OpenCV 3 tutorials page, where I detail how to install OpenCV 3 with opencv_contrib support enabled for a variety of operating systems and Python versions.
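
If you are reading this on a newer OpenCV build, SIFT has since moved back into the main module. The following is a minimal compatibility sketch, not part of the original tutorial; it assumes the same image variable as detectAndDescribe and that cv2.SIFT_create is available (it exists in OpenCV 4.4+):

# prefer xfeatures2d when present (OpenCV 3.X contrib builds), otherwise
# fall back to cv2.SIFT_create on OpenCV 4.4+
if hasattr(cv2, "xfeatures2d"):
	descriptor = cv2.xfeatures2d.SIFT_create()
else:
	descriptor = cv2.SIFT_create()

(kps, features) = descriptor.detectAndCompute(image, None)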

Lines 58-65 handle the case where we are using OpenCV 2.4. The cv2.FeatureDetector_create function instantiates our keypoint detector (DoG). A call to detect returns our set of keypoints.

From there, we need to initialize cv2.DescriptorExtractor_create using the SIFT keyword to set up our SIFT feature extractor. Calling the compute method of the extractor returns a set of feature vectors that quantify the region surrounding each of the detected keypoints in the image.

Finally, our keypoints are converted from KeyPoint objects to a NumPy array (Line 69) and returned to the calling method (Line 72).

Next up, let's look at the matchKeypoints method:

	def matchKeypoints(self, kpsA, kpsB, featuresA, featuresB,
		ratio, reprojThresh):
		# compute the raw matches and initialize the list of actual
		# matches
		matcher = cv2.DescriptorMatcher_create("BruteForce")
		rawMatches = matcher.knnMatch(featuresA, featuresB, 2)
		matches = []

		# loop over the raw matches
		for m in rawMatches:
			# ensure the distance is within a certain ratio of each
			# other (i.e. Lowe's ratio test)
			if len(m) == 2 and m[0].distance < m[1].distance * ratio:
				matches.append((m[0].trainIdx, m[0].queryIdx))

The matchKeypoints function requires four arguments: the keypoints and feature vectors associated with the first image, followed by the keypoints and feature vectors associated with the second image. David Lowe's ratio test variable and the RANSAC re-projection threshold are also supplied.

Matching features together is actually a fairly straightforward process. We simply loop over the descriptors from both images, compute the distances, and find the smallest distance for each pair of descriptors. Since this is a very common practice in computer vision, OpenCV has a built-in function called cv2.DescriptorMatcher_create that constructs the feature matcher for us. The BruteForce value indicates that we are going to exhaustively compute the Euclidean distance between all feature vectors from both images and find the pairs of descriptors that have the smallest distance.
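
If you prefer, a roughly equivalent matcher can be constructed with the cv2.BFMatcher class directly. This is an alternative sketch rather than the code this post uses; NORM_L2 (Euclidean distance) is the appropriate norm for floating-point SIFT descriptors:

# equivalent in spirit to cv2.DescriptorMatcher_create("BruteForce")
matcher = cv2.BFMatcher(cv2.NORM_L2)
rawMatches = matcher.knnMatch(featuresA, featuresB, k=2)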

A call to knnMatch on Line 79 performs k-NN matching between the two feature vector sets using k=2 (indicating the top two matches for each feature vector are returned).

The reason we want the top two matches rather than just the top one match is that we need to apply David Lowe's ratio test for false-positive match pruning.

Again, Line 79 computes the rawMatches for each pair of descriptors, but there is a chance that some of these pairs are false positives, meaning that the image patches are not actually true matches. In an attempt to prune these false-positive matches, we can loop over each of the rawMatches individually (Line 83) and apply Lowe's ratio test, which is used to determine high-quality feature matches. Typical values for Lowe's ratio are usually in the range [0.7, 0.8]. With a ratio of 0.75, for example, a match whose best distance is 100 is kept only if the second-best distance exceeds roughly 133.

Once we have obtained the matches using Lowe's ratio test, we can compute the homography between the two sets of keypoints:

		# computing a homography requires at least 4 matches
		if len(matches) > 4:
			# construct the two sets of points
			ptsA = np.float32([kpsA[i] for (_, i) in matches])
			ptsB = np.float32([kpsB[i] for (i, _) in matches])

			# compute the homography between the two sets of points
			(H, status) = cv2.findHomography(ptsA, ptsB, cv2.RANSAC,
				reprojThresh)

			# return the matches along with the homography matrix
			# and status of each matched point
			return (matches, H, status)

		# otherwise, no homography could be computed
		return None

Computing a homography between two sets of points requires, at a bare minimum, an initial set of four matches. For a more reliable homography estimation, we should have substantially more than just four matched points.

Finally, the last method in our Stitcher class, drawMatches, is used to visualize keypoint correspondences between two images:

	def drawMatches(self, imageA, imageB, kpsA, kpsB, matches, status):
		# initialize the output visualization image
		(hA, wA) = imageA.shape[:2]
		(hB, wB) = imageB.shape[:2]
		vis = np.zeros((max(hA, hB), wA + wB, 3), dtype="uint8")
		vis[0:hA, 0:wA] = imageA
		vis[0:hB, wA:] = imageB

		# loop over the matches
		for ((trainIdx, queryIdx), s) in zip(matches, status):
			# only process the match if the keypoint was successfully
			# matched
			if s == 1:
				# draw the match
				ptA = (int(kpsA[queryIdx][0]), int(kpsA[queryIdx][1]))
				ptB = (int(kpsB[trainIdx][0]) + wA, int(kpsB[trainIdx][1]))
				cv2.line(vis, ptA, ptB, (0, 255, 0), 1)

		# return the visualization
		return vis

This method requires that we pass in the two original images, the set of keypoints associated with each image, the initial matches after applying Lowe's ratio test, and finally the status list provided by the homography calculation. Using these variables, we can visualize the "inlier" keypoints by drawing a straight line from keypoint N in the first image to keypoint M in the second image.

Now that we have our Stitcher class defined, let's move on to creating the stitch.py driver script:

# import the necessary packages
from pyimagesearch.panorama import Stitcher
import argparse
import imutils
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-f", "--first", required=True,
	help="path to the first image")
ap.add_argument("-s", "--second", required=True,
	help="path to the second image")
args = vars(ap.parse_args())

We start off by importing our required packages on Lines 2-5. Notice how we've placed the panorama.py and Stitcher class into the pyimagesearch module, just to keep our code tidy.

Note: If you are following along with this post and having trouble organizing your code, please be sure to download the source code using the form at the bottom of this post. The .zip of the code download will run out of the box without any errors.

From there, Lines 8-14 parse our command line arguments: --first, which is the path to the first image in our panorama (the left-most image), and --second, the path to the second image in the panorama (the right-most image).

Remember, these image paths need to be supplied in left-to-right order!

The rest of the stitch.py driver script simply handles loading our images, resizing them (so they can fit on our screen), and constructing our panorama:

# load the two images and resize them to have a width of 400 pixels
# (for faster processing)
imageA = cv2.imread(args["first"])
imageB = cv2.imread(args["second"])
imageA = imutils.resize(imageA, width=400)
imageB = imutils.resize(imageB, width=400)

# stitch the images together to create a panorama
stitcher = Stitcher()
(result, vis) = stitcher.stitch([imageA, imageB], showMatches=True)

# show the images
cv2.imshow("Image A", imageA)
cv2.imshow("Image B", imageB)
cv2.imshow("Keypoint Matches", vis)
cv2.imshow("Result", result)
cv2.waitKey(0)

Once our images are loaded and resized, we initialize our Stitcher class on Line 23. We then call the stitch method, passing in our two images (again, in left-to-right order) and indicating that we would like to visualize the keypoint matches between the two images.

Finally, Lines 27-31 display our output images on our screen.

Panorama stitching results

In mid-2014 I took a trip out to Arizona and Utah to enjoy the national parks. Along the way I stopped at many locations, including Bryce Canyon, Grand Canyon, and Sedona. Given that these areas contain beautiful scenic views, I naturally took a bunch of photos, some of which are perfect for constructing panoramas. I've included a sample of these images in today's blog post to demonstrate panorama stitching.

All that said, let's give our OpenCV panorama stitcher a try. Open up a terminal and issue the following command:

$ python stitch.py --first images/bryce_left_01.png \
	--second images/bryce_right_01.png
Figure 1: (Top) The two input images from Bryce Canyon (in left-to-right order). (Bottom) The matched keypoint correspondences between the two images.

At the top of this figure, we can see the two input images (resized to fit on my screen; the raw .jpg files are a much higher resolution). And on the bottom, we can see the matched keypoints between the two images.

Using these matched keypoints, we can apply a perspective transform and obtain the final panorama:

Figure 2: Constructing a panorama from our two input images.

As we can see, the two images have been successfully stitched together!

Note: On many of these example images, you'll often see a visible "seam" running through the center of the stitched images. This is because I shot many of the photos using either my iPhone or a digital camera with autofocus turned on, so the focus is slightly different between each shot. Image stitching and panorama construction work best when you use the same focus for every photo. I never intended to use these vacation photos for image stitching, otherwise I would have taken care to adjust the camera settings. In either case, just keep in mind that the seam is due to varying sensor properties at the time I took the photos and was not intentional.

Let's give another set of images a try:

$ python stitch.py --first images/bryce_left_02.png \
	--second images/bryce_right_02.png
Figure 3: Another successful application of image stitching with OpenCV.

Again, our Stitcher class was able to construct a panorama from the two input images.

Now, let's move on to the Grand Canyon:

$ python stitch.py --first images/grand_canyon_left_01.png \
	--second images/grand_canyon_right_01.png
Figure 4: Applying image stitching and panorama construction using OpenCV.

In the above input images we can see heavy overlap between the two input images. The main addition to the panorama is towards the right side of the stitched images, where we can see more of the "ledge" added to the output.

Here's another example from the Grand Canyon:

$ python stitch.py --first images/grand_canyon_left_02.png \
	--second images/grand_canyon_right_02.png
Figure 5: Using image stitching to build a panorama using OpenCV and Python.

From this example, we can see that more of the huge expanse of the Grand Canyon has been added to the panorama.

Finally, let's wrap up this blog post with an example of image stitching from Sedona, AZ:

$ python stitch.py --first images/sedona_left_01.png \
	--second images/sedona_right_01.png
Figure 6: One final example of applying image stitching.

Personally, I find the red rock country of Sedona to be one of the most beautiful areas I've ever visited. If you ever have the chance, definitely stop by; you won't be disappointed.

So there you have it, image stitching and panorama construction using Python and OpenCV!


Summary

In this blog post we learned how to perform image stitching and panorama construction using OpenCV. Source code was provided for image stitching for both OpenCV 2.4 and OpenCV 3.

Our image stitching algorithm requires four steps: (1) detecting keypoints and extracting local invariant descriptors; (2) matching descriptors between images; (3) applying RANSAC to estimate the homography matrix; and (4) applying a warping transformation using the homography matrix.

While simple, this algorithm works well in practice when constructing panoramas from two images. In a future blog post, we'll review how to construct panoramas and perform image stitching for more than two images.

Anyway, I hope you enjoyed this post! Be sure to use the form below to download the source code and give it a try.

Download the Source Code and FREE 17-page Resource Guide

Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL!


Source: https://pyimagesearch.com/2016/01/11/opencv-panorama-stitching/
