26 Apr

Using the Cognex In-Sight 7000 Gen II Status Lights

Cognex 7000 Gen II Status Light

At KTM Research, we have been testing the recently released Cognex In-Sight 7802 Gen II camera.  One of the most interesting features of the In-Sight 7000 Gen II series cameras is the new on-camera illuminator.  This is a major new feature that opens up a lot of new possibilities, like the SurfaceFX tool that we looked at in more depth in an earlier blog post.

One of the less touted features, but one that we think will be really useful, is the 360 degree status light! Cognex has taken one of the two user-controllable status lights and redesigned it to encircle the entire camera.  This makes the status light visible from any direction.  The second user-controllable status light (red) remains located on the interface panel with the other lights and buttons.

Controlling the status lights

Cognex documentation could do a better job of explaining the user-controllable status lights in general, and how to use them specifically.  If this is something you have not done before, it’s pretty straightforward.  If you have ever used discrete outputs on the In-Sight line of products, this will look very familiar, as it is essentially the same process.

It’s always best to use a relatively up-to-date version of In-Sight Explorer.  I am currently using In-Sight Explorer version 5.4.0 for this example.  You can always find the most up-to-date version on Cognex’s In-Sight Support Page here.

Open In-Sight Explorer, log into your camera, and open a new spreadsheet view.  If you are in EasyBuilder mode, switch to spreadsheet view by pressing Ctrl+Shift+V.  Once in spreadsheet view, you are going to set up three cells.

In cell A1, type CheckBox (without the quotation marks) and press Enter.  You will get a popup window like the one shown below.  If you copy/paste it, make sure that the leading tick mark (‘) is deleted, or else it will be treated like a comment.  Type “Light” into the Name field as shown and click the OK button.

Cognex In-Sight Explorer CheckBox Window

In cell B1, type Button and press Enter.  You will get the button configuration popup window like the one shown below.  Type “Set Light” into the Name field and press the OK button.

Cognex In-Sight Button Window

In the next cell to the right, C1, type WriteDiscrete and press Enter.  You will get the WriteDiscrete popup window like the one shown below.  In this window, we are going to change several of the fields.  For the Event and Value fields, you can either type in the data shown in the image, or you can double-click on the name portion of the field in the window and then double-click on the spreadsheet cell you want to use.  The Event field should reference the button created in cell B1.  The Value field should reference the check box created in cell A1.  Set the Start Bit field to 4 for the green wrap-around status light.  When you have your window set up like this, press the OK button.  (Note: you can come back to this step and set the Start Bit to 5 to toggle the red user-controllable light.)

Cognex In-Sight WriteDiscrete Window

Now you should have a spreadsheet that looks like mine below.  When you are ready to test this, put the camera into “online” mode by clicking on the online/offline button in the top menu, or by pressing Ctrl+F8.  When the camera is online, you should see the green box with “Online” as shown at the bottom right of my screenshot below.

Cognex In-Sight Example Program Window
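For reference, the three cells from this walkthrough end up holding the following (a plain-text summary of the dialog settings, not literal cell syntax):

```
A1: CheckBox      ->  Name: "Light"
B1: Button        ->  Name: "Set Light"
C1: WriteDiscrete ->  Event: $B$1, Value: $A$1, Start Bit: 4 (green ring) or 5 (red LED)
```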

 

With the camera online, you can set the light status with the check box.  To make the light change, you will need to trigger the WriteDiscrete function to update the bit registers.  In this example, that is done by pressing the Set Light button.  Go ahead and play with checking or unchecking the check box and pressing the button.  You should see the light turn on and off.  If not, make sure that you are in online mode.

You now have the basic framework to start exploring the user-controllable lighting.  By using value cells that are calculated automatically, and events triggered by your software instead of the button, the possibilities are almost endless.
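As a purely hypothetical example of that idea (the pass/fail formula and event source below are placeholders, not part of this walkthrough), the same WriteDiscrete cell could follow an inspection result on every cycle instead of waiting for an operator:

```
A1: <pass/fail formula, e.g. 1 when an inspection tool finds its feature>
B1: <an event cell fired by your job on each inspection, instead of the Button>
C1: WriteDiscrete ->  Event: $B$1, Value: $A$1, Start Bit: 4
```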

About KTM Research:

KTM Research is an engineering firm that specializes in industrial machine vision systems for quality control and vision-guided robotics.  Formed in 2009, we are located in Tualatin, Oregon.  We serve industries in the fields of advanced manufacturing, consumer electronics, bio-tech, food and beverage, research, and logistics.  Our systems have been successfully used by customers across North America and Asia.

Our goal at KTM Research is to be the first call you make when faced with a vision challenge.  Our team of engineers view themselves as an extension of your organization and strive to be your trusted vision partner.  Our success is our clients’ success.  Our collaborative approach to projects with our conservative and robust design process allows KTM Research to successfully complete projects that many others cannot.

Contact KTM Research at info@ktmresearch.com for more information on our vision solutions.

 

24 Apr

Cognex SurfaceFX Feature Extract Tool

Cognex SurfaceFX Featured Image

Cognex’s SurfaceFX tool offers a new way to inspect physical features that are engraved, embossed, stamped, etc.  It can also find physical defects such as scratches, dents, and puncture holes.  SurfaceFX outputs an image that any of Cognex’s existing tools can then analyze.  Cognex SurfaceFX opens up many possibilities for inspections that would otherwise be difficult or impossible.  We have included some sample imagery below, captured at KTM Research’s facilities, to demonstrate the capabilities of SurfaceFX.

SurfaceFX Feature Extract Tool

Cognex In-Sight 7802

The Cognex SurfaceFX capability is available on the In-Sight 7000 Gen II series of cameras.  The feature is enabled by the new on-camera illuminator, which offers individual control of four lighting quadrants.  By capturing four separate images, each illuminated by one quadrant of the illuminator, a reconstructed image similar to photometric stereo can be achieved.  While not true photometric stereo, as available with MVTec Halcon’s photometric stereo feature, the Cognex SurfaceFX feature extract tool gives a similar effect with much less work.

Independently controlled quadrant illumination

While the new on-camera illuminator’s independent control of the four quadrants enables the SurfaceFX tool, it also opens up other new imaging possibilities that would previously have required complex off-camera hardware.  Below is an example of a ping pong ball imaged with each of the four lighting quadrants (left, top, bottom, right), demonstrating the directionality of the light.

Cognex In-Sight 7000 Gen II Quadrant Light

The SurfaceFX tool takes these four images and generates the SurfaceFX image.  The SurfaceFX image can then have any of the other Cognex filters, tools, or scripts applied.  The new on-camera illuminator with independent quadrant control opens up many possibilities for novel inspection methodologies.
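Cognex does not publish the internals of SurfaceFX, but the basic idea behind combining directionally lit images can be sketched in a few lines of code.  The snippet below is a rough, hypothetical illustration only (it is not Cognex’s algorithm, and the function name and scaling are our own): opposing quadrant images are differenced so that surface slope survives while flat print largely cancels, and the two gradient components are combined into one relief-style image.

```python
import numpy as np

def quadrant_relief(left, right, top, bottom):
    """Rough relief-style image from four directionally lit captures.

    Each argument is a 2D grayscale image taken with only one quadrant of the
    ring light on.  This illustrates the general idea, not Cognex's actual
    SurfaceFX implementation.
    """
    left, right, top, bottom = (np.asarray(im, dtype=np.float64)
                                for im in (left, right, top, bottom))
    gx = right - left       # horizontal shading difference: responds to slope
    gy = bottom - top       # vertical shading difference
    relief = np.hypot(gx, gy)            # combine both gradient directions
    relief -= relief.min()               # normalize to 0-255 for display
    if relief.max() > 0:
        relief *= 255.0 / relief.max()
    return relief.astype(np.uint8)
```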

SurfaceFX Application Examples

The most direct and simple use of Cognex SurfaceFX is to find physical defects (bumps, holes, etc.) in a smooth surface.  The example below shows a coffee creamer pouch with a small hole in the seal.  With traditional illumination (left), this would be very difficult for machine vision software to successfully identify.  With Cognex SurfaceFX (right), the physical features, including the hole, are obvious while printed text and graphics practically vanish.

Cognex SurfaceFX Example Creamer Hole Defect

Below is an example of a foil packet with a lot code and expiration date stamped into the material.  The shiny surface would normally make this inspection extremely difficult, even with off-camera lighting.  SurfaceFX and the on-camera quadrant illumination make the stamped features stand out with enough contrast for simple inspection.

Cognex SurfaceFX Example Pill Pouch Embossed

The example below is of a debossed lot code marking in an injection molded part.  The original image (left) shows that, with the lighting set up to eliminate glare on the marking, low contrast makes the inspection very difficult.  With SurfaceFX (right), the marking stands out in high contrast against the background and allows for easy OCR.

Cognex SurfaceFX Example Power Supply Marking

The examples above highlight features with sharp edges.  Features with sharp edges work especially well with SurfaceFX, but it also handles softer features.  In the example below, the embossed information on the bottom of a coffee creamer is difficult or impossible to read (left) using machine vision.  The SurfaceFX image (right) does an excellent job of extracting the physical features with enough contrast for analysis.

Cognex SurfaceFX Example Creamer Bottom

The Solo cup lid below shows a difficult, but potentially possible, text extraction application using diffuse lighting.  This type of inspection is made significantly easier using SurfaceFX to increase the contrast of the physical features.  Like the coffee creamer example above, the lid has softer, rounded features, but SurfaceFX is still easily able to extract them with sufficient contrast for analysis.  Again, this example uses a stock In-Sight 7000 Gen II sensor with only the on-camera illumination.

Cognex SurfaceFX Example Solo Lid

One key area where Cognex SurfaceFX excels with the on-camera lighting is imaging features on shiny or glossy surfaces.  This was an almost impossible task before, but SurfaceFX easily extracts physical features regardless of glare.  The example below of a Leatherman tool would have been impossible with on-camera lighting without SurfaceFX.
Cognex SurfaceFX Example Leatherman

In addition to imaging shiny surfaces, Cognex SurfaceFX can extract very fine details that would otherwise be impossible to discern.  The vertical lines in this Taiwanese coin provide an excellent example of this capability. (Click for full size image.)
Cognex SurfaceFX Example New Taiwan Dollar

Coins provide an excellent example of everything Cognex SurfaceFX excels at.  They are difficult to illuminate and have physical features that even with off-camera illumination are still difficult to analyze.  Below we have included the full-resolution example of a newer United States penny.

Cognex SurfaceFX Example Penny

Like the penny above, the quarter below also demonstrates SurfaceFX’s ability to extract physical features.  In this example, all the text around the edge is debossed instead of embossed, as is found on many coins.
Cognex SurfaceFX Example Quarter

 

Using Cognex SurfaceFX

In our opinion, the Cognex In-Sight 7000 Gen II is one of the best cameras that Cognex has produced to date.  The on-camera illuminator is significantly better than anything that Cognex has released before.  The fact that the new illuminator with independent quadrant control allows for the SurfaceFX tool is just one more reason to strongly consider this camera series for your next project.

There are some caveats to consider with SurfaceFX.  The tool relies on illuminating the part from four different sides.  The on-camera illuminator is larger than before, but will work best on objects smaller than three to four inches (75-100 mm).  The camera also needs to be closer to the object than we are used to with Cognex’s other cameras, which can cause distortion issues.

For example, we can show extreme detail on the back of the penny below.  To the human eye, the Abraham Lincoln statue inside the Lincoln Memorial is almost invisible, but it is easily extracted with SurfaceFX.

Cognex SurfaceFX Penny Close Up

The image of three pennies below demonstrates the reduced field of view required to obtain this level of detail, as seen on the middle penny.  Note that in this setup, the front face of the camera/lighting is approximately two inches (50 mm) from the coin.  The image shows the SurfaceFX effect dropping out on the neighboring pennies.  Click the image for the uncropped full-resolution version.

Cognex SurfaceFX Penny FOV Preview

The image of the three pennies above demonstrates an extreme example of the reduced usable field of view.  In most applications, the FOV reduction will not be as extreme, but should be taken into account when designing a system using the new Cognex 7000 Series Gen II cameras.

If you have an application that you believe Cognex SurfaceFX would be a good solution for, feel free to contact KTM Research for more information or to set up a demo.  KTM Research specializes in machine vision and can make sure your vision project succeeds.

 

About KTM Research

KTM Research is an engineering firm that specializes in industrial machine vision systems for quality control and vision-guided robotics.  Formed in 2009, we are located in Tualatin, Oregon.  We serve industries in the fields of advanced manufacturing, consumer electronics, bio-tech, food and beverage, research, and logistics.  Our systems have been successfully used by customers across North America and Asia.

Our goal at KTM Research is to be the first call you make when faced with a vision challenge.  Our team of engineers view themselves as an extension of your organization and strive to be your trusted vision partner.  Our success is our clients’ success.  Our collaborative approach to projects with our conservative and robust design process allows KTM Research to successfully complete projects that many others cannot.

Contact KTM Research at info@ktmresearch.com for more information on our vision solutions.

21 Apr

Self-Training Machine Vision on the Fly: Counting Parts and Identifying Novel Objects

KTM Research recently developed a proof of concept system using Halcon software that is able to correctly identify a set of parts, such as various sizes and styles of wood screws and rivets, without pre-trained shape models or patterns. The system works by having an operator present a number of parts to the system during a training phase. Based on the parts presented, the system automatically selects the best features to identify the parts and then trains itself using machine learning. The system then identifies parts based on the training data, with some machine-learning modes even enabling “novelty detection” – in other words, the ability to identify parts as something other than what was trained.

screws

Step 1: Six unique parts are presented by the operator to the camera during the training phase.

During the training phase, the operator shows the system parts either by taking multiple images of a single part, a single image of many of the same part, or some combination. The system automatically segments the images, using a threshold to turn the outline of each part into a blob. It then computes 30+ potential features for each blob. These features are fed into an algorithm that selects which combination of features is required to sort the objects the way the operator did during the training phase.

screws

Step 2: The six unique parts and two additional types of parts that were not trained are presented to the camera.

Note that this same approach could be used to sort screws from nuts regardless of size or sort sizes of nuts depending on how the user presented the parts to the system. This all happens automatically with no other input from the user besides presenting parts to the system.
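The proof of concept was built in Halcon, but the overall flow can be sketched with off-the-shelf Python libraries. The snippet below is a simplified, hypothetical version of the pipeline (the helper names, the four features, and the novelty threshold are ours, and the real system selects from 30+ features automatically): threshold the image, compute shape features per blob, fit a nearest-neighbor model to the operator’s training images, and report parts that sit far from every trained class as unknown.

```python
import cv2
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

def blob_features(gray):
    """Threshold the image and return one shape-feature vector per blob."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    feats = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < 50:                                    # skip noise blobs
            continue
        perimeter = cv2.arcLength(c, True)
        _, _, w, h = cv2.boundingRect(c)
        circularity = 4.0 * np.pi * area / max(perimeter ** 2, 1e-6)
        feats.append([area, perimeter, w / float(h), circularity])
    return np.array(feats)

class PartSorter:
    """Nearest-neighbor part classifier with a distance-based novelty check."""

    def train(self, images, labels, novelty_threshold=3.0):
        feats, blob_labels = [], []
        for image, label in zip(images, labels):
            f = blob_features(image)
            feats.append(f)
            blob_labels += [label] * len(f)   # one label per blob in the image
        X = np.vstack(feats)
        self.labels = blob_labels
        self.scaler = StandardScaler().fit(X)
        self.nn = NearestNeighbors(n_neighbors=1).fit(self.scaler.transform(X))
        self.novelty_threshold = novelty_threshold

    def identify(self, image):
        """Return a label per blob, or 'unknown' for untrained (novel) parts."""
        feats = self.scaler.transform(blob_features(image))
        dist, idx = self.nn.kneighbors(feats)
        return [self.labels[i[0]] if d[0] < self.novelty_threshold else "unknown"
                for d, i in zip(dist, idx)]
```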

screws

Step 3: The software identifies the unique parts that the operator trained the system to recognize and the additional parts (in red) that were not trained.

About KTM Research

KTM Research is an engineering firm that specializes in industrial machine vision systems for quality control and vision-guided robotics.  Formed in 2009, we are located in Tualatin, Oregon.  We serve industries in the fields of advanced manufacturing, consumer electronics, bio-tech, food and beverage, research, and logistics.  Our systems have been successfully used by customers across North America and Asia.

Our goal at KTM Research is to be the first call you make when faced with a vision challenge.  Our team of engineers view themselves as an extension of your organization and strive to be your trusted vision partner.  Our success is our clients’ success.  Our collaborative approach to projects with our conservative and robust design process allows KTM Research to successfully complete projects that many others cannot.

Contact KTM Research at info@ktmresearch.com for more information on our vision solutions.

19 Apr

Testing the New Cognex 7802 with SurfaceFX

Cognex In-Sight 7802 Lineup

Update: After writing this article, we did extensive, in-depth testing of the new SurfaceFX tool.  Be sure to check out our larger write-up on SurfaceFX, with many more example images, here!

 


KTM Research has been testing the new Cognex 7802 camera and its SurfaceFX capabilities. We are very impressed with this new camera! First, it is one of the best-built cameras we have seen from Cognex yet. It is clear that a lot of thought has been put into a design that will make implementing the camera much easier than before. One of the most notable new features is the integrated lighting, which allows for individual control over the four quadrants. This gives the new camera and software the ability to pick out hard-to-resolve features and text that are otherwise difficult to identify using In-Sight Explorer.

Compared to the first generation 7000 series sensor, it feels very robust and has a good heft.  The Cognex In-Sight 7802 is a 2MP camera with 1600×1200 resolution.  The new design offers many more options for configuration than before.  Just look at this chart from Cognex showing some of the available options:

Cognex In-Sight 7802 Options

 

In our tests, we have found SurfaceFX capable of minimizing glare from shiny parts and picking out fiducial marks on plates without the need for any off-camera lighting. Another useful application of SurfaceFX is finding raised or sunken marks that are covered by printed text.

wrench image

Small wrench with normal illumination.

wrench with surfacefx

After processing with the Cognex 7802 camera and SurfaceFX, the letters and logo are now legible.

For more information on SurfaceFX, check out our more in-depth look here, or click the image below.

Cognex SurfaceFX Featured Image


We have been running long-term target acquisition repeatability studies on the Cognex 7802 over the last few weeks and are impressed with the results. Using a 16mm lens, a blue filter, and a blue four-quadrant ring light, we have taken about 40,000 images. With the camera riding on the end of a Denso robot, we found 4 pixels of range in X,Y coordinates and 2 degrees of rotation range when finding the intersection of two fiducial lines. This result was repeated with both fiducial marks (using SurfaceFX) and vision targets (acquired with all four lighting quadrants on) through a variety of warehouse lighting conditions. With the camera held still and under fixed lighting conditions, the camera is capable of 0.5 pixel range in X,Y coordinates and 0.2 degree rotation range.

Even under aggressive “torture test” lighting conditions, the stationary camera showed only 1.5 pixels of range. The take-away is that the Cognex 7802 camera, paired with the quadrant ring light and the new software tools released in In-Sight 5.4, has impressive accuracy and repeatability. The Cognex 7802’s measurement repeatability is better than the Denso robot’s positioning accuracy.
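A quick note on these figures: the sketch below shows how a range number like this can be computed, assuming “range” means the max-minus-min spread of the repeated measurements (the fiducial intersection itself comes from the vision job, and the example values are made up).

```python
import numpy as np

def repeatability_range(measurements):
    """Spread (max minus min) of [x_px, y_px, angle_deg] over repeated finds."""
    m = np.asarray(measurements, dtype=np.float64)
    return m.max(axis=0) - m.min(axis=0)

# Example with three made-up fiducial intersection results
runs = [[812.4, 604.1, 31.2],
        [813.9, 605.0, 30.4],
        [816.1, 607.8, 32.1]]
print(repeatability_range(runs))   # -> [3.7, 3.7, 1.7]
```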

Cognex 7802 riding on the end of a Denso robot for long-term testing.


About KTM Research:

KTM Research is an engineering firm that specializes in industrial machine vision systems for quality control and vision-guided robotics.  Formed in 2009, we are located in Tualatin, Oregon.  We serve industries in the fields of advanced manufacturing, consumer electronics, bio-tech, food and beverage, research, and logistics.  Our systems have been successfully used by customers across North America and Asia.

Our goal at KTM Research is to be the first call you make when faced with a vision challenge.  Our team of engineers view themselves as an extension of your organization and strive to be your trusted vision partner.  Our success is our clients’ success.  Our collaborative approach to projects with our conservative and robust design process allows KTM Research to successfully complete projects that many others cannot.

Contact KTM Research at info@ktmresearch.com for more information on our vision solutions.

09 Feb

Linearity and repeatability of robots in optical scanning applications

The Problem – Visual quality control inspection of complex geometry parts

KTM Research recently investigated the linearity of several robotic arms and a linear stage for use in linear optical scanning of complex geometry parts.  Reproduction of an accurate image (stability) and repeatability are both critically important to many modern optical scanning techniques used in manufacturing quality control processes.  Automated optical inspection of parts on a manufacturing line saves time and money.  We paired a line-scan camera end effector (designed and built in-house) with a UR5 robot, a Denso robot, and a Parker 404XR linear stage for comparison.


A line scan end effector designed by KTM Research.

Results of the Study

The UR5’s overall accuracy and repeatability were not well suited for scanning parts, as you can see below.  The Denso showed strength in scanning parts with complex surface geometry, depending on the inspection requirements.  The Denso scan is not completely stable, although it is repeatable and consistent to within ~5 pixels, even close to control singularities.  Depending on the use case, safety requirements may drive significant complexity and expense in the overall system design.  For high-accuracy applications, we found the Parker linear stage to be the best fit for the job.  KTM Research systems using a Parker linear stage and the linear scan method are currently operating in factories in the United States, China, and Malaysia.

The Parker 404XR linear stage produces excellent, stable, and repeatable results when performing linear scans.  For this reason, we recommend a Parker linear stage when high precision and repeatability are important in linear optical scanning.

Linear optical scan of a UR5 robot.

The UR5 robot produces significant instability and does not have repeatable scan patterns.

Linear optical scan using a Denso robot.

The linear scan using a Denso robot shows some instabilities near singularities in the control system. However, the instabilities are repeatable.

Analysis of the UR5 Robot

A UR5 robotic arm with an optical scanner.

A UR5 robotic arm test setup in KTM Research’s laboratory. An optical scanner is attached to the end of the UR5.

UR5 robots are “collaborative” robots that can work around people without separation from operators and can be operated and maintained by people who are not robotics experts.  The lack of guarding/safety requirements with the UR5 allows for a much smaller, simpler, and easier-to-use work cell for vision analysis compared to other robotic systems.  For work cells designed to be operated and maintained by personnel without a programming or robotics background, the UR5 can be a good choice to decrease required training and reduce safety risks.

With the KTM Research-designed linear optical scanner end-effector, the UR5 was not able to move with enough linear accuracy to generate acceptable imagery for analysis with the line-scan camera, even though the UR5’s specifications indicated that it should be possible.  To test the UR5 in a best-case scenario, a ruler was attached to the robot and moved in front of the line-scan camera to evaluate its linear accuracy without any payload.

Linear optical scan of a UR5 robot.

The UR5 robot produces significant instability and does not have repeatable scan patterns.

Best case scenario scan using the UR5.

In a best-case scenario where the ruler was attached to the UR5 and the linear optical scanner was held fixed, instabilities were still present in the image and the test image was not repeatable.

We found that the UR5 is not able to manipulate the line-scan end-effector as designed, even though it was within the published specifications.  With the line-scan end-effector mounted in a non-ideal setting, the UR5 shows extremely poor linear motion accuracy.  Tested in a “best-case” scenario with almost no payload (a very light ruler), the UR5 still exhibits linear motion that is outside the acceptable limits for high precision linear optical scanning.  For these reasons, we ruled out the UR5 as a robotic vision platform for high precision linear optical scanning.

Analysis of the Denso Robot

Denso robot test setup

A Denso robot test setup with an optical scanner in KTM Research’s laboratory.

Denso robots are high-end motion systems with several key advantages over other systems, enabling the smooth and precise motion needed for imaging and other applications where stability and repeatability are important.  Advanced control algorithms are available for Denso robots that take payload weight and moment of inertia into account when planning motion paths.  Denso software also has good singularity management during linear moves, which reduces path variations.  The Denso line of robots is able to manipulate heavy and awkward loads while maintaining movement path accuracy.  However, Denso robots do require additional complexity, cost, and increased work cell size in the form of required machine guarding and safety systems.  Denso robots are not currently rated to work directly alongside people.

We found that the Denso is able to easily manipulate the KTM Research-designed end-effector in any orientation or mounting configuration.  The Denso robot’s velocity is able to exceed the camera’s imaging rate for the lighting configuration that we used.  This means that with a different lighting configuration and a higher speed camera, the Denso could be used to take images even more rapidly.  Our testing of several different lens/camera/robot combinations demonstrated that optical analysis of a customer’s single three-faced assembly is possible within the motion space of an inverted VS-G Denso robot.  The resulting test images demonstrate some linear motion scan issues that are surmountable in post-processing prior to image analysis.  The linear path inconsistencies of the Denso robot are repeatable within several pixels.

Linear optical scan using a Denso robot.

The linear scan using a Denso robot shows some instabilities near singularities in the control system. However, the instabilities are repeatable.

We concluded that the Denso robot system is usable for vision analysis, but is not ideal.  The linear motion inconsistencies are highly repeatable and thus could be addressed in the models used for vision analysis.  However, this is not ideal, because anything that causes the inconsistencies to shift will break the models, leading to a high rate of false rejects.  Prior to deploying Denso robots in this application, we recommend more testing of the potential for inconsistency shift to ensure robust operation in a production environment and to minimize false rejects.  One consideration that KTM Research customers should take into account is that a work cell for the Denso system requires machine guarding and safety systems that increase cost, complexity, and work cell size.
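Because the Denso’s path errors repeat from scan to scan, one way to address them is to map the error once against a known-good reference and remove it from later scans.  The sketch below is a rough, hypothetical illustration of that idea (not our production post-processing): estimate a per-row horizontal offset by cross-correlating each row of a measured scan against a golden scan, then shift the rows of subsequent scans back by those stored offsets.

```python
import numpy as np

def row_offsets(measured, golden):
    """Per-row horizontal shift (pixels) of a measured scan vs. a golden scan."""
    offsets = []
    for m_row, g_row in zip(measured, golden):
        corr = np.correlate(m_row - m_row.mean(), g_row - g_row.mean(), mode="full")
        offsets.append(corr.argmax() - (len(g_row) - 1))   # lag of the peak
    return np.array(offsets)

def correct_scan(scan, offsets):
    """Undo stored, repeatable per-row offsets on a new scan of the same path."""
    corrected = np.empty_like(scan)
    for i, (row, off) in enumerate(zip(scan, offsets)):
        corrected[i] = np.roll(row, -int(off))   # edge wrap-around ignored here
    return corrected
```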

Analysis of the Parker 404XR Linear Stage

Parker 404XR linear stage.

A Parker 404XR linear stage test setup with a KTM Research-designed linear optical scanner.

Parker linear stages are highly accurate and highly repeatable single axis actuators used in many manufacturing applications.  A linear stage can be paired with a rotating table or with other linear stages to permit linear scanning of complex geometries.  One downside of Parker linear stages is that they are not rated for direct contact with people.  Appropriate safety systems and machine guarding are required for most manufacturing production applications.

A Parker 404XR linear stage produces excellent stable and repeatable results when performing linear scans.

We found that the Parker 404XR linear stage produces excellent results when coupled with the KTM Research-designed linear optical scanner.  There are no stability issues and repeatability is extremely good.  In all but the most complex and difficult scanning applications, a Parker linear stage is a good choice.

Conclusion

Through KTM Research’s laboratory testing, we determined that the UR5 robot is not a suitable solution for high precision optical scanning applications.  The Denso robot is a possibility for scanning parts with complex geometries, although the image scan is not completely stable and there is significant complexity and expense involved in deploying a Denso robot due to operator safety requirements.  The Parker 404XR linear stage provided the best results in our tests and received our recommendation to our customers.  There are now KTM Research systems deployed in manufacturing plants in the USA and Asia using the Parker 404XR linear stage.

About KTM Research:

KTM Research is an engineering firm that specializes in industrial machine vision systems for quality control and vision-guided robotics.  Formed in 2009, we are located in Tualatin, Oregon.  We serve industries in the fields of advanced manufacturing, consumer electronics, bio-tech, food and beverage, research, and logistics.  Our systems have been successfully used by customers across North America and Asia.

Our goal at KTM Research is to be the first call you make when faced with a vision challenge.  Our team of engineers view themselves as an extension of your organization and strive to be your trusted vision partner.  Our success is our clients’ success.  Our collaborative approach to projects with our conservative and robust design process allows KTM Research to successfully complete projects that many others cannot.

Contact KTM Research at info@ktmresearch.com for more information on our vision solutions.

18 Jan

Vision Inspection to Reduce Bottle Breakage in Beverage Case Sealer Systems

The problem – Breakage and downtime at the beverage bottle case sealer:

A common problem in bottling operations is inspection of bottles in the case prior to sealing the case for shipping to the customer.  Case sealing machines are unable to detect bottles that are improperly seated in the case.  Attempting to seal cases with improperly seated or misplaced bottles results in bottle breakage and damage to the sealing machines, along with glass and other contamination issues that can cause significant production downtime.

KTM Research can provide advanced 3D vision systems to inspect cases before they enter the case sealing machine, stopping the line before breakage occurs if any misplaced or missing bottles are detected.

Bottle Inspection:

Bottle inspection on the bottle case sealer looks for proper placement of the bottles in the case prior to case sealing.  The two key modes of failure are protruding bottles and misplaced bottles.  A secondary mode of failure is a missing bottle that can lead to an angry customer one bottle short of a full case.

Misplaced and protruding bottles in a 24 bottle case.

Bottles that are not fully seated are those whose tops are more than 1.25” above the normal bottle top height.  Any bottle exceeding 1.25” should be considered a failure and rejected before the case sealer.  A misplaced bottle is any bottle that is not in a slot.  Oftentimes these bottles lay on top of other bottles and protrude high enough to be broken by the case sealer.  Any case with a misplaced bottle should also be rejected.

The solution – LMI’s Gocator Laser Profile Scanner:

Bottle inspection is an ideal application for a laser profile scanner.  Laser profile scanners use a laser line along with a camera to interpret the height profile at the intersection of the laser line and the cross-section of the shape being scanned.  If the part being inspected is moved under the laser profile sensor, a height map (z-height) is generated by sampling the laser line profile as the part is scanned.  The laser profiler includes internal processing capabilities that may be able to analyze the profiles and provide an output signal indicating whether the case passed or failed the inspection.
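In other words, each trigger of the profiler returns one row of z values, and stacking those rows as the case moves under the sensor is what produces the height map.  A minimal sketch of that assembly step (assuming the profiles arrive as equal-length arrays at a fixed encoder step):

```python
import numpy as np

def build_height_map(profiles):
    """Stack successive laser-line profiles (1D z arrays) into a 2D height map."""
    return np.vstack([np.asarray(p, dtype=np.float64) for p in profiles])
```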

Laser profilers are somewhat sensitive to gloss and reflectiveness of the surface.  If there are large differences between different products, a calibration may be required at the beginning of the product batch to recalibrate the sensor.  For instance, if one batch of bottles has shiny foil wrapping while another is covered in matte black labels, recalibration may be needed.

LMI Gocator basic operation principle: A laser scanner images a moving target and captures height profile information for interpretation.

See the case in 3D – See the problems, avoid the breakage:

The laser profiler is the best solution for bottle inspection.  Both the misplaced and protruding bottle conditions are easily detectable in the laser profile.  If desired, missing bottles are also easily detectable so that corrective action can be taken.

The image below is a pseudo-color presentation of profile data taken from a case of bottles with some misplaced, missing, and protruding bottles.  Colors represent z-height values from the profile, with red indicating the highest areas scanned.  The heights shown in the legend are roughly equivalent to millimeters in this case.  Blue is the bottom of the case and reds are the bottle caps.  While not obvious from the pseudo-color ranges, the protruding bottle has a z-height value of 258 versus the seated bottle height of 233 (a 25 mm difference), which corresponds very closely to the actual protruding bottle height of approximately 1 inch above the seated bottles.

3D profile views of missing and protruding bottles taken with a LMI Gocator.

A 3D profile view is shown to more clearly depict the differences in height between the protruding bottle and the seated bottles.  A misplaced bottle is also detectable in the profile data.  The image below shows the profile and 3D views with a misplaced bottle.  The bottle label is a key feature for detecting a misplaced bottle: it is a large area with a z-height equal to or above that of the bottle caps.  This condition is also often accompanied by a missing bottle detected in the case (blue square).

 

A misplaced and a missing bottle in a profile and a 3D view taken with LMI’s Gocator.

Note that all the images above are missing a row of bottles at the bottom.  This is due to the shadow of the single profile scanner we used for these preliminary test scans.  Since the bottle inspection requires seeing all the bottle caps, the configuration for the bottle inspection will require two profilers configured to cover the full area inside the case.  The image below depicts the bottle inspection configuration.

Two LMI Gocators operating in “buddy mode” to fully capture height information from inside of a 24 bottle case.

This “buddy” configuration is a standard feature of the LMI Gocator laser profiler, where two Gocators work together to produce a single height map of the complete case interior.  Processing of the height map is then accomplished by the laser profiler and an external PC for inspection analysis.
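As a rough illustration of what that external processing could look like, the sketch below applies the protruding/misplaced height check and a simple missing-bottle count to a height map in millimeter-scale units.  The thresholds, helper names, and cap-counting heuristic are hypothetical, not the deployed system’s analysis.

```python
import numpy as np
from scipy import ndimage

SEATED_CAP_Z = 233.0                 # z-height of a properly seated cap (mm-scale)
PROTRUSION_LIMIT = 1.25 * 25.4       # the 1.25 inch limit, converted to mm
EXPECTED_BOTTLES = 24
MIN_CAP_AREA_PX = 200                # smallest blob accepted as a bottle cap

def inspect_case(height_map):
    """Rough pass/fail check on one case's 2D height map."""
    # Protruding or misplaced bottles: anything well above seated-cap height.
    protruding = bool((height_map > SEATED_CAP_Z + PROTRUSION_LIMIT).any())

    # Missing bottles: count cap-sized blobs near the seated-cap height.
    cap_mask = np.abs(height_map - SEATED_CAP_Z) < 5.0
    labeled, n_blobs = ndimage.label(cap_mask)
    sizes = ndimage.sum(cap_mask, labeled, index=range(1, n_blobs + 1))
    n_caps = int((np.asarray(sizes) >= MIN_CAP_AREA_PX).sum())
    missing = n_caps < EXPECTED_BOTTLES

    return {"pass": not (protruding or missing),
            "protruding": protruding,
            "caps_found": n_caps}
```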

About KTM Research:

KTM Research is an engineering firm that specializes in industrial machine vision systems for quality control and vision-guided robotics.  Formed in 2009, we are located in Tualatin, Oregon.  We serve industries in the fields of advanced manufacturing, consumer electronics, bio-tech, food and beverage, research, and logistics.  Our systems have been successfully used by customers across North America and Asia.

Our goal at KTM Research is to be the first call you make when faced with a vision challenge.  Our team of engineers view themselves as an extension of your organization and strive to be your trusted vision partner.  Our success is our clients’ success.  Our collaborative approach to projects with our conservative and robust design process allows KTM Research to successfully complete projects that many others cannot.

Contact KTM Research at info@ktmresearch.com for more information on our vision solutions.

15 Jun

Disabling Windows 10 automatic upgrade

While Windows 10 brings many great new features to the Microsoft operating system, many of our industrial and commercial clients have expressed frustration with the persistent way that Microsoft has chosen to repeatedly prompt users to upgrade, and now to automatically schedule upgrades.  In some settings, an unexpected upgrade could cause massive problems, with a much higher risk of issues in highly specialized and complex industrial tools and systems.  Issues with automatic updates breaking things are already a well-documented problem!

Read More

02 Feb

KTM Research Moves to Tualatin

Beginning in April, KTM Research will be moving to a new location in Tualatin, Oregon.  Our new location is conveniently located minutes from I-5 and I-205 and offers improved facilities with increased fabrication space to handle our rapidly growing needs.  Please contact us if you have any questions about our new location, or would like to set up a time to tour our new facility.

Read More

09 Oct

Dynamic DNS ddclient with NameCheap

This post will address how to properly configure the dynamic DNS client ddclient when being used with NameCheap.

I often face issues dealing with dynamically assigned IP addresses.  A dynamic IP address is one that can change periodically.  How often depends a lot on the particular circumstances: my home IP changes only two or three times a year, but one of my clients’ IP addresses seems to change almost daily.  If you find yourself in a situation similar to my client’s, where you need to connect to a network over the internet but that network’s external IP address is constantly changing, a dynamic DNS (DDNS) service is a good solution.

Read More