Friday, December 23, 2005

[NEWS]本田 ASIMO 智慧型機器人升級

ASIMO is the fruit of Honda's enormous investment in technology research, and one of the few humanoid robots in the world today capable of bipedal walking; its endearing, good-natured design has won it many fans. It recently received an upgrade that improves its control components, making its movements more agile than before (it can now run at 6 km/h, twice its previous speed). In particular, a 6-axis force sensor built into each wrist lets it sense the weight of objects it is holding, and by lowering the actuators' gear ratios and making the control more compliant, it can perform motions that require fine control, such as receiving an object from someone. Using an on-board sensor that reads the microchip in an ID card, the new ASIMO can recognize a woman approaching from behind; it can also carry out fairly complex actions such as carrying a tray or serving coffee. Perhaps in the near future it will be able to handle reception duties at a company, and the day when ASIMO takes over our household chores should not be far off either.

Monday, December 19, 2005

[NEWS]Sony robot keeps a third eye on things

TOKYO, Japan (Reuters) -- Robots may not be able to do everything humans can, but the latest version of Sony's humanoid robot has something many people might find useful: a third eye.


The Japanese consumer electronics company's roller-skating robot, QRIO, has now been enlightened with an extra camera eye on its forehead that allows it to see several people at once and focus in on one of them.
At an exhibit on intelligent machines in the glitzy shopping district of Ginza on Friday, Sony unveiled the new and improved QRIO, which walked jauntily toward the audience, swinging its hips to music.
"Hello everyone," it said in Japanese. "I am Sony's QRIO. Let me introduce you to my new camera and improved arms."
It demonstrated its newfound flexibility by wiggling its fingers and waving its arms in a breakdance move.
The toddler-sized QRIO, which stands 58.5 centimeters (23 inches) tall and weighs 7.5 kilograms (16.5 pounds), then turned towards a group of women, responding to one who was waving to it.
"These new capabilities bring humans and machines closer together," said Katsumi Muto, general manager of Sony's Entertainment Robot unit.
"We're aiming for a machine that doesn't just respond to commands but also reaches out to humans."
The new QRIO also showed off its ability to identify blocks by size and color, lift them using its lower body and stack one on top of the other with its dextrous fingers.
"I wonder if I can handle this," QRIO muttered to itself as it carried out the task.
Standing in front of the successfully stacked boxes, QRIO ended the demo with a little victory dance.
Sony plans to start shipping samples next March of a camera module version of QRIO's third eye, which it calls a "chameleon eye."
Muto said Sony had no plans to market QRIO itself but would apply new developments into other products.

[OpenCV] A Japanese Tutorial

/* ● Implementation of the CamShift (Continuously Adaptive Mean Shift) algorithm ● */
/* */
/* Source: a message on the OpenCV mailing list */
/* */
/* */
/* Link libraries */
/* DirectShow : quartz.lib strmiids.lib strmbase.lib strmbasd.lib */
/* IPL&OpenCV : ipl.lib highgui.lib cv.lib */
/* */
/* Note */
/* Copy iCamShift.h into the project */
/* */
/* 2003.01.30. Takahashi */
/* Main part */
// Run CamShift
// Display the window
cvShowImage( wndname, image );

/* Global declarations */
#include "cv.h"
#include "streams.h"
#include "CV.hpp"
#include "iCamShift.h"
#include "highgui.h"
static bool IsInit = false;
static bool IsTracking = false;
CvCamShiftTracker m_cCamShift;
CvCamShiftParams m_params;
CvRect m_object;
/* Callback (invoked for every captured frame) */
void CMyCamShift::callback(IplImage *image)
{
    if (IsTracking)             // ● [Branch 1] already tracking
    {
        // Apply CamShift (do not re-initialize the histogram)
        ApplyCamShift(image, false);
        // Draw a cross over the color distribution matching the histogram model
        DrawCross(image);
    }
    else                        // ● [Branch 1] not tracking yet
    {
        // Get the frame resolution
        CvSize size = cvGetSize( image );
        if (IsInit)             // ● [Branch 2] the tracking button was pressed
        {
            // Compute the region for the probability distribution (the initial red rectangle)
            m_object.x = cvRound( size.width * m_params.x );
            m_object.y = cvRound( size.height * m_params.y );
            m_object.width = cvRound( size.width * m_params.width );
            m_object.height = cvRound( size.height * m_params.height );
            // Apply CamShift for the first time (initialize the histogram)
            ApplyCamShift( image, true );
            // Tracking is now running
            IsTracking = true;
        }
        else                    // ● [Branch 2] the initial state ends up here
        {
            CvPoint p1, p2;
            // Compute the region for the probability distribution (the initial red rectangle)
            p1.x = cvRound( size.width * 0.4f );
            p1.y = cvRound( size.height * 0.3f );
            p2.x = cvRound( size.width * (0.4f + 0.2f));
            p2.y = cvRound( size.height * (0.3f + 0.3f));
            // Display the region
            cvRectangle( image, p1, p2, CV_RGB(255,0,0), 1);
        }
    }
}
/* Start tracking */
void CMyCamShift::Start_Tracking()
{
    m_params.x = 0.4f;      // initial ratios of the probability-distribution region (the initial red rectangle)
    m_params.y = 0.3f;
    m_params.width = 0.2f;
    m_params.height = 0.3f;
    m_params.Smin = 20;     // minimum of channel 1
    m_params.Vmin = 40;     // minimum of channel 2
    m_params.Vmax = 255;    // maximum of channel 2
    m_params.bins = 20;     // number of histogram bins
    m_params.view = 0;
    m_params.threshold = 0; // threshold
    IsInit = true;          // initialized
    IsTracking = false;     // not tracking yet
}
/* Apply CamShift */
void CMyCamShift::ApplyCamShift(IplImage *image, bool initialize)
{
    CvSize size;
    int bins = m_params.bins;
    // Configure the color histogram
    m_cCamShift.set_hist_dims( 1, &bins );          // number of histogram bins
    // m_cCamShift.set_thresh( 0, 1, 180 );         // (note) removed in Beta_3
    m_cCamShift.set_threshold( 0 );                 // histogram threshold
    m_cCamShift.set_min_ch_val( 1, m_params.Smin ); // minimum of channel 1
    m_cCamShift.set_max_ch_val( 1, 255 );           // maximum of channel 1
    m_cCamShift.set_min_ch_val( 2, m_params.Vmin ); // minimum of channel 2
    m_cCamShift.set_max_ch_val( 2, m_params.Vmax ); // maximum of channel 2
    // Get the image data (size)
    cvGetImageRawData( image, 0, 0, &size );
    // Clamp the search window if it leaves the application window
    if( m_object.x < 0 ) m_object.x = 0;
    if( m_object.x > size.width - m_object.width - 1 )
        m_object.x = MAX(0, size.width - m_object.width - 1);
    if( m_object.y < 0 ) m_object.y = 0;
    if( m_object.y > size.height - m_object.height - 1 )
        m_object.y = MAX(0, size.height - m_object.height - 1);
    if( m_object.width > size.width - m_object.x )
        m_object.width = MIN(size.width, size.width - m_object.x);
    if( m_object.height > size.height - m_object.y )
        m_object.height = MIN(size.height, size.height - m_object.y);
    // Set the position of the search window
    m_cCamShift.set_window( m_object );
    // If the color histogram should be (re)initialized
    if( initialize )
    {
        // Reset the histogram
        m_cCamShift.reset_histogram();
        // Update the color histogram inside the search window
        m_cCamShift.update_histogram( (CvImage*)image );
    }
    // Update the object position inside the search window
    m_cCamShift.track_object( (CvImage*)image );
    // Re-fetch the search window, now fitted to the object
    m_object = m_cCamShift.get_window();
}
/* Draw the cross */
void CMyCamShift::DrawCross(IplImage *image)
{
    float cs = (float)cos( m_cCamShift.get_orientation() );
    float sn = (float)sin( m_cCamShift.get_orientation() );
    int x = m_object.x + m_object.width / 2;
    int y = m_object.y + m_object.height / 2;
    CvPoint p1 = {(int)(x + m_cCamShift.get_length() * cs / 2),
                  (int)(y + m_cCamShift.get_length() * sn / 2)};
    CvPoint p2 = {(int)(x - m_cCamShift.get_length() * cs / 2),
                  (int)(y - m_cCamShift.get_length() * sn / 2)};
    CvPoint p3 = {(int)(x + m_cCamShift.get_width() * sn / 2),
                  (int)(y - m_cCamShift.get_width() * cs / 2)};
    CvPoint p4 = {(int)(x - m_cCamShift.get_width() * sn / 2),
                  (int)(y + m_cCamShift.get_width() * cs / 2)};
    cvLine( image, p1, p2, CV_RGB(255,0,0), 1, 8 ); // major axis
    cvLine( image, p4, p3, CV_RGB(0,0,255), 1, 8 ); // minor axis
    // The rest is for debugging
    if((p1.x != p2.x) && (p3.x != p4.x))
    {
        // Display the dynamically changing search window
        cvRectangle( image,
                     cvPoint(m_object.x, m_object.y),
                     cvPoint(m_object.x + m_object.width,
                             m_object.y + m_object.height),
                     CV_RGB(255,0,0), 1);
        CvFont font;
        cvInitFont( &font, CV_FONT_VECTOR0, 0.5, 0.5, 0.0, 2 );
        cvPutText( image, "p1", p1, &font, CV_RGB(255,0,0));
        cvPutText( image, "p2", p2, &font, CV_RGB(255,0,0));
        cvPutText( image, "p3", p3, &font, CV_RGB(0,0,255));
        cvPutText( image, "p4", p4, &font, CV_RGB(0,0,255));
    }
    else
    {
        // If the cross can no longer be drawn, reset the state
        IsInit = false;
        IsTracking = false;
    }
}
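The window clamping in ApplyCamShift is the part most easily broken by off-by-one errors at the image border, so it is worth testing in isolation. Below is a self-contained re-implementation of exactly that clamping logic (the struct and function names are mine, standing in for OpenCV's CvRect and CvSize):

```c
/* Stand-ins for OpenCV's CvRect and CvSize, so this compiles alone. */
typedef struct { int x, y, width, height; } Rect;
typedef struct { int width, height; } Size;

#define MAXI(a, b) ((a) > (b) ? (a) : (b))
#define MINI(a, b) ((a) < (b) ? (a) : (b))

/* Clamp a CamShift search window so it stays inside the image,
 * mirroring the checks performed in ApplyCamShift. */
Rect clamp_window(Rect w, Size img)
{
    if (w.x < 0) w.x = 0;
    if (w.x > img.width - w.width - 1)
        w.x = MAXI(0, img.width - w.width - 1);
    if (w.y < 0) w.y = 0;
    if (w.y > img.height - w.height - 1)
        w.y = MAXI(0, img.height - w.height - 1);
    if (w.width > img.width - w.x)
        w.width = MINI(img.width, img.width - w.x);
    if (w.height > img.height - w.y)
        w.height = MINI(img.height, img.height - w.y);
    return w;
}
```

A window pushed past the right or bottom edge comes back clamped to the last fully visible position, which is what keeps track_object from reading outside the frame.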

Friday, December 16, 2005


#1 Insect-robot structure DIY reference - link
Hello everyone!!! ... y&CategoryID=19 This site offers quite a few kits with detailed assembly instructions. You can refer to the structural designs in its Assembly Guides and try a DIY build with your own materials. You can start with thin plywood; if possible, use ABS or PVC sheet, which is easier to cut with a scroll saw. Sheet metal would require cold-working techniques, which you would probably have to ask someone about.


#1 Five-legged robot - link. Hello everyone! This site collects many robots; CHARLOTTE in particular has a build-log PDF. That document is what got me started collecting material to build insect robots. Give it a try!

Tuesday, December 13, 2005

[OpenCV] Tracking LED's with OpenCV and CAMSHIFT

Hi Mike,
I must say you are going to do a really nice project.
Well, to the best of my knowledge CAMSHIFT is not heavily dependent on the colour. If you somehow create the blob, using thresholding or whatever method, CAMSHIFT will help you to track it. But as you said, you must give the first position.
Initially, you can have a better algorithm to detect exactly the location of the LEDs. As soon as you detect the LEDs you can switch to the normal algorithm, where you just track what you acquired initially.
What you have to do is segment the LED section properly and find the contour. After getting the contour of the blob, give that to the CAMSHIFT function.
int cvCamShift(const CvArr* prob_image, CvRect window, CvTermCriteria criteria, CvConnectedComp* comp, CvBox2D* box=NULL);
As I remember (I used this function in 2002 ;-)), you should give the initial window through "window" and get the next window to be considered through "comp->rect". Then apply that new window as the initial window the next time it is called.
Hope this will help you. Good luck! [I have used CAMSHIFT to track a person inside a room. CAMSHIFT worked perfectly. I just found the blob using a normal skin-colour detection method and gave the blob's contour bounding box to the CamShift.]
Ruwan Janapriya
-----Original Message-----
From: [] On Behalf Of Mikael
Sent: Tuesday, September 20, 2005 12:39 PM
To: OpenCV@yahoogroups.com
Subject: [OpenCV] Tracking LED's with OpenCV and CAMSHIFT

Hi, I'm currently working on a project to build a helicopter which I then have to track (and control) using a camera + OpenCV.
I'm going to mount 4 LEDs beneath the helicopter (in the form of a cross), and then track them from the ground using a webcam to figure out its orientation. I have had thoughts of using CAMSHIFT, but after some research I've discovered that it relies heavily on colors. Colors could be a problem since the main plan was to use IR LEDs, and that would make most of the image B&W. The other problem is that I have to manually tell CAMSHIFT what it should track (by putting a small box around one of the LEDs), and I would rather have a system running that auto-discovers the LED points.
Anyone got a suggestion for another tracking method or approach, maybe? Any help would be greatly appreciated!
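For intuition about what cvCamShift iterates internally: each CAMSHIFT step recentres the search window on the centroid of the back-projection (probability) image inside the current window, computed from the zeroth and first image moments. Here is a minimal stand-alone sketch of that single step (my simplification for illustration, not the OpenCV source):

```c
/* One mean-shift step: recentre the window on the centroid (from the
 * zeroth and first moments) of the probability image inside the current
 * window.  prob is a w*h row-major array of per-pixel weights that say
 * how strongly each pixel matches the tracked object's histogram. */
void meanshift_step(const unsigned char *prob, int w, int h,
                    int *wx, int *wy, int ww, int wh)
{
    long m00 = 0, m10 = 0, m01 = 0;
    for (int y = *wy; y < *wy + wh && y < h; ++y)
        for (int x = *wx; x < *wx + ww && x < w; ++x) {
            int v = prob[y * w + x];
            m00 += v;
            m10 += (long)v * x;
            m01 += (long)v * y;
        }
    if (m00 > 0) {
        /* move the window so its centre sits on the centroid */
        *wx = (int)(m10 / m00) - ww / 2;
        *wy = (int)(m01 / m00) - wh / 2;
    }
}
```

Real CAMSHIFT additionally adapts the window size from the zeroth moment and repeats this step until the window stops moving; cvCamShift wraps exactly that loop and returns the fitted window through comp->rect.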

Friday, December 02, 2005

[Surface]Accurate and Scalable Surface Representation and Reconstruction from Images


We introduce a new surface representation, the patchwork, to extend the problem of surface reconstruction from multiple images. A patchwork is the combination of several patches that are built one by one. This design potentially allows the reconstruction of an object of arbitrarily large dimensions while preserving a fine level of detail. We formally demonstrate that this strategy leads to a spatial complexity independent of the dimensions of the reconstructed object, and to a time complexity linear with respect to the object area. The former property ensures that we never run out of storage (memory) and the latter means that reconstructing an object can be done in a reasonable amount of time. In addition, we show that the patchwork representation handles equivalently open and closed surfaces whereas most of the existing approaches are limited to a specific scenario (open or closed surface but not both). Most of the existing optimization techniques can be cast into this framework. To illustrate the possibilities offered by this approach, we propose two applications that expose how it dramatically extends a recent accurate graph-cut technique. We first revisit the popular carving techniques. This results in a well-posed reconstruction problem that still enjoys the tractability of voxel space. We also show how we can advantageously combine several image-driven criteria to achieve a finely detailed geometry by surface propagation. The above properties of the patchwork representation and reconstruction are extensively demonstrated on real image sequences.
MIT-LCS-TR-1011.pdf - pdf format, 35 pages long

Monday, November 28, 2005

[Vision] Visual Servoing Systems, from NSI

I. Research motivation and the state of research at home and abroad

Many everyday human actions are based on information received through vision. Therefore, if a robot is to perform similar tasks without relying heavily on other sensing modalities or redesigning the surrounding environment, it must be able to receive visual information and act on it. Because machine vision can emulate the human visual system and allows non-contact measurement of the environment, it is an important sensor in robotic systems. For systems that must interact with their environment, real-time vision is an ideal source of feedback signals. A video camera is a passive, non-contact, non-destructive sensor that can perform measurement, recognition, and object tracking over a fairly large region of space, making it an extremely attractive sensing method. Since today's computers can process visual data quickly, feedback control systems that use cameras as sensors have become practical. Designing, analyzing, and implementing "visual servoing systems" that exploit visual data acquired by cameras to interact with the environment is thus a highly challenging research topic, and its theoretical and practical problems, along with the development and application of related techniques, have attracted wide attention from researchers at home and abroad.

Visual servoing is a maturing research direction in robot control. When the control task is appropriately defined in terms of visual information, task accuracy can be guaranteed: it is not limited by the accuracy of the robot's own open-loop control, and visual feedback can improve system stability. In fact, precise camera calibration is no longer strictly necessary, and since the price of real-time image-processing hardware has fallen sharply thanks to demand from multimedia technology, the research and application value of real-time visual servoing systems has drawn much attention.

Research on visual servoing can be traced back to the early 1970s, and it has received particular attention over the last ten years; it is an innovative field combining real-time computer vision, robotics, and control. In the 1980s, Weiss pioneered research on image-based visual servoing (IBVS), in contrast to traditional Cartesian-based/position-based visual servoing (CBVS). Feddema completed the first implementation of an IBVS system in 1989. In the early 1990s, Wilson used a Kalman filter to estimate object pose and thereby improve CBVS systems, while Corke proposed feature prediction and feedforward control techniques to achieve stable, high-performance feature tracking. In 1998, Malis and Chaumette developed the first hybrid visual servoing architecture, called 2.5D visual servoing, which controls the translational degrees of freedom with IBVS while using epipolar geometry to estimate the required rotational motion, thereby combining IBVS and CBVS. Given the rapid development of visual servoing research, even more significant breakthroughs can be expected within the next ten years. Visual servoing is already applied in many fields:
1. Manufacturing automation: weld-seam tracking, grasping and positioning of workpieces, parts assembly.
2. Military applications: aerial refueling of aircraft, automatic docking of spacecraft.
3. Intelligent transportation systems: vehicle guidance, automatic aircraft landing, deep-sea vehicle operation.
4. Emulating human behavior: playing table tennis and hockey, inverted-pendulum balancing, hitting and catching balls.
Key references:
1. S. Hutchinson, G. D. Hager and P. Corke, "A tutorial on visual servo control," IEEE Trans. on Robotics and Automation, vol. 12, pp. 651-670, Oct. 1996.
2. T. Drummond and R. Cipolla, "Real-time visual tracking of complex structures," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 932-946, Jul. 2002.
3. N. P. Papanikolopoulos, P. K. Khosla and T. Kanade, "Visual tracking of a moving target by a camera mounted on a robot: a combination of vision and control," IEEE Trans. on Robotics and Automation, vol. 9, pp. 14-35, 1993.
4. E. Malis, F. Chaumette and S. Boudet, "2-1/2-D visual servoing," IEEE Trans. on Robotics and Automation, vol. 15, pp. 238-250, Apr. 1999.
5. J. Stavnitzky and D. Capson, "Multiple camera model-based 3-D visual servo," IEEE Trans. on Robotics and Automation, vol. 16, no. 6, pp. 732-739, Dec. 2000.
6. D. Xiao, K. Ghosh, N. Xi and T. J. Tarn, "Sensor-based hybrid position/force control of a robot manipulator in an uncalibrated environment," IEEE Transactions on Control Systems Technology, vol. 8, no. 4, pp. 635-645, Feb. 2000.
7. F. Tendick, J. Voichick, G. Tharp and L. Stark, "A supervisory telerobotic control system using model-based vision feedback," in Proc. IEEE Int'l Conf. on Robotics and Automation, pp. 2280-2285, 1991.
8. E. Dickmanns and F. R. Schell, "Autonomous landing of airplanes by dynamic machine vision," in Proc. IEEE Workshop on Applications of Computer Vision, pp. 172-179, IEEE Comput. Soc. Press, Nov. 1992.
II. Research directions and concrete recommendations

In recent years, real-time computer vision, visual tracking, and vision-based motion control have gradually become important research directions. Although these fields broadly span vision, control, robotics, and artificial intelligence, they share a common trend: the integrated development of visual-information processing and control theory. Against this background, future directions for visual servoing research are outlined below:
1. Active visual tracking systems. The visual sensor produces image-space measurements that feed the servo control system, but the sensing task itself, when tracking visual features, is often also a control task; especially when the camera's orientation can be controlled in real time, its active capability must be exploited effectively to extend the sensing range. Combining software image-processing techniques with appropriate hardware for capture and acceleration is a necessary improvement for an effective real-time tracking system. In practice, the time delay introduced by image capture has an unavoidable effect on the performance of a real-time control system, so predictors designed around appropriate motion models, compensating effectively for this delay, become all the more important.
2. Real-time 3D reconstruction. In practical applications of visual servoing, whether three-dimensional information can be obtained often determines how far the system's functionality can be extended. Moreover, in real-time binocular or multi-camera control systems, the correspondence problem for image features must be solved effectively before stereo visual servoing can be applied widely. In Cartesian position-based visual servoing (CBVS) systems, the ability to reconstruct 3D information effectively and update it in real time would greatly improve system performance and broaden the range of applications.
3. Visual servoing system architecture and analysis. The architecture of a visual servoing system can be improved both in how the visual sensors are arranged and in how the subsequent signals are analyzed. The hybrid visual servoing systems developed in recent years, for example, combine the advantages of image-based visual servoing (IBVS) and Cartesian position-based visual servoing (CBVS) to improve performance, but practical issues and room for improvement remain. Innovative visual servoing architectures are bound to be key to improving system performance.
4. Task encoding and visual servoing control theory.
5. Multi-camera visual servoing. As visual sensors of all kinds become cheaper, servo systems using multiple cameras have become practical. Fundamentally, the redundant visual information measured in overlapping fields of view can improve system robustness, while non-overlapping configurations can effectively extend the sensing range. This kind of visual servoing architecture can be applied to large-scale visual servoing tasks, completing autonomously, accurately, and quickly the kind of large positioning tasks that previously required substantial manual assistance.
6. Multi-sensor-based hybrid control. The control tasks that a purely vision-based system can accomplish may be limited; systems that must execute complex tasks often need multiple sensors to achieve their control objectives. Although processing each sensor signal individually and executing the corresponding task can often meet practical requirements, integrating all the sensory information to accomplish the combined control task in a systematic and accurate way deserves continued research and development, so that multiple sensors can be used effectively to carry out control tasks in complex environments.
7. Tele-operated monitoring and security systems. Through an image-based human-machine interface, the user sets control goals while the remote plant completes the task autonomously. This architecture can be applied effectively to the monitoring of automated warehouse management systems and home security systems, improving the flexibility and reliability of remote monitoring. It also spares people from operating in hazardous environments such as space, the deep sea, or high-radiation areas.
8. Navigation based on visual servoing. Visual servoing can be applied to all kinds of vehicles: navigation, automatic driving, and collision avoidance for road vehicles; automatic takeoff and landing of aircraft; and missile guidance and control. Beyond adding convenience and safety to everyday life, military applications benefit because no radio waves need be emitted, avoiding enemy detection and jamming of the mission.
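The image-based (IBVS) control law discussed throughout this section, commanding a velocity proportional to the negative image-feature error mapped through the inverse of the interaction matrix, can be shown concretely for one point feature and a rotation-only (pan/tilt) camera, where the 2x2 interaction matrix inverts in closed form. The sketch below is my own 2-DOF reduction of the standard law, for illustration only:

```c
/* One step of image-based visual servoing (IBVS) for a single point
 * feature, using only the camera's pan/tilt (rotational) motion.
 * For a point (x, y) in normalized image coordinates, the rotational
 * part of the interaction matrix is
 *     L = [ x*y      -(1+x*x) ]
 *         [ 1+y*y    -x*y     ]
 * and the classical law is  omega = -lambda * inv(L) * e,  e = s - s*.
 * Note det(L) = 1 + x*x + y*y > 0, so inv(L) always exists here. */
typedef struct { double wx, wy; } Omega;

Omega ibvs_point_rot(double x, double y,     /* measured feature */
                     double xd, double yd,   /* desired feature  */
                     double lambda)          /* control gain     */
{
    double ex = x - xd, ey = y - yd;
    double l11 = x * y,       l12 = -(1.0 + x * x);
    double l21 = 1.0 + y * y, l22 = -x * y;
    double det = l11 * l22 - l12 * l21;  /* = 1 + x*x + y*y */
    Omega o;                             /* omega = -lambda * inv(L) * e */
    o.wx = -lambda * ( l22 * ex - l12 * ey) / det;
    o.wy = -lambda * (-l21 * ex + l11 * ey) / det;
    return o;
}
```

By construction the commanded rates satisfy L·omega = -lambda·e, so the image error decays exponentially at rate lambda along both image axes, which is the defining property of the IBVS law.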

[Vision]A tutorial on visual servo control

Hutchinson, S.; Hager, G.D.; Corke, P.I., Dept. of Electr. & Comput. Eng., Illinois Univ., Urbana, IL
This paper appears in: IEEE Transactions on Robotics and Automation. Publication date: Oct 1996.
[Citing Documents]

Multiple camera model-based 3-D visual servo, Stavnitzky, J.; Capson, D., IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 732-739, Dec 2000

Nonlinear controllability and stability analysis of adaptive image-based systems, Conticelli, F.; Allotta, B., IEEE Transactions on Robotics and Automation, vol. 17, no. 2, pp. 208-214, Apr 2001

Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system, Dixon, W.E.; Dawson, D.M.; Zergeroglu, E.; Behal, A., IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 31, no. 3, pp. 341-352, Jun 2001

Visually guided landing of an unmanned aerial vehicle, Saripalli, S.; Montgomery, J.F.; Sukhatme, G.S., IEEE Transactions on Robotics and Automation, vol. 19, no. 3, pp. 371-380, June 2003

Autonomous 3-D positioning of surgical instruments in robotized laparoscopic surgery using visual servoing, Krupa, A.; Gangloff, J.; Doignon, C.; de Mathelin, M.F.; Morel, G.; Leroy, J.; Soler, L.; Marescaux, J., IEEE Transactions on Robotics and Automation, vol. 19, no. 5, pp. 842-853, Oct. 2003

Nonlinear visual mapping model for 3-D visual tracking with uncalibrated eye-in-hand robotic system, Jianbo Su; Yugeng Xi; Hanebeck, U.D.; Schmidt, G., IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 34, no. 1, pp. 652-659, Feb. 2004

Visual servoing invariant to changes in camera-intrinsic parameters, Malis, E., IEEE Transactions on Robotics and Automation, vol. 20, no. 1, pp. 72-81, Feb. 2004

Vision-based PID control of planar robots, Cervantes, I.; Garrido, R.; Alvarez-Ramirez, J.; Martinez, A., IEEE/ASME Transactions on Mechatronics, vol. 9, no. 1, pp. 132-136, March 2004

Robust direct visual servo using network-synchronized cameras, Schuurman, D.C.; Capson, D.W., IEEE Transactions on Robotics and Automation, vol. 20, no. 2, pp. 319-334, April 2004

Keeping features in the field of view in eye-in-hand visual servoing: a switching approach, Chesi, G.; Hashimoto, K.; Prattichizzo, D.; Vicino, A., IEEE Transactions on Robotics, vol. 20, no. 5, pp. 908-914, Oct. 2004

Image-based visual servo control of aerial robotic systems using linear image features, Mahony, R.; Hamel, T., IEEE Transactions on Robotics, vol. 21, no. 2, pp. 227-239, April 2005

Friday, November 25, 2005

[OpenCV] Install OpenCV for VC6.0

Install OpenCV for VC6.0
OpenCV is a library for computer vision released by Intel Corp. It includes more than 300 of the most commonly used algorithms in image processing, but for now it supports only C/C++. The related information is below:
And the related Chinese link:
Download the OpenCV Lib
The first step is to download the OpenCV library; you can also download it here:
or you can download it from my website: OpenCV Beta4.0 [Now it is OpenCV Beta5.0]
The second step is to download DirectX from the Microsoft website:
or you can download it from my website: DirectX 9.0c (It's a little big.)
After installing the library, go to the root folder of OpenCV. If you didn't change any configuration, it will be here:
C:\Program Files\OpenCV
Then open C:\Program Files\OpenCV\docs\faq.htm
You can solve a lot of questions by yourself from there, but......not every problem is covered.
Recompile the OpenCV Lib
Go to C:\Program Files\OpenCV\_make and you will find opencv.dsw (opencv.sln). Open it and recompile it in both Win32 Debug and Release modes, and you will get
the libraries you need in C:\Program Files\OpenCV\lib.
* Don't know how to compile? Just ask your friends or your classmates.
Set the Environment Variable
In the "System Environment Variables" table, find the "Path" variable and add "C:\Program Files\OpenCV\bin" to it.
* The environment variables can be found through "My Computer".

Customize your project
Whenever you create a new project, you should do the following customization yourself, or you won't be able to use the OpenCV library.
The steps below are collected from faq.htm, plus my small modifications.
Customize project settings:
Activate project setting dialog by choosing menu item "Project"->"Settings...".
Select your project in the right pane.
Tune settings, common to both Release and Debug configurations:
Select "Settings For:"->"All Configurations"
Choose "C/C++" tab -> "Preprocessor" category -> "Additional Include Directories:". Add comma-separated relative (to the .dsp file) or absolute paths: (C:\Program Files\OpenCV\cxcore\include,C:\Program Files\OpenCV\cv\include,C:\Program Files\OpenCV\otherlibs\highgui,C:\Program Files\OpenCV\cvaux\include)
Choose "Link" tab -> "Input" category -> "Additional library path:". Add the paths to all necessary import libraries (cxcore[d].lib cv[d].lib highgui[d].lib cvaux[d].lib) like: (C:\Program Files\OpenCV\lib\CVd.lib,C:\Program Files\OpenCV\lib\cxcore.lib,C:\Program Files\OpenCV\lib\cxcored.lib,C:\Program Files\OpenCV\lib\cv.lib,C:\Program Files\OpenCV\lib\highgui.lib,C:\Program Files\OpenCV\lib\highguid.lib,C:\Program Files\OpenCV\lib\cvaux.lib,C:\Program Files\OpenCV\lib\cvauxd.lib)
Tune settings for "Debug" configuration
Select "Settings For:"->"Win32 Debug".
Choose "Link" tab -> "General" category -> "Object/library modules". Add "C:\Program Files\OpenCV\lib\CVd.lib" "C:\Program Files\OpenCV\lib\HighGUId.lib" "C:\Program Files\OpenCV\lib\cvauxd.lib" "C:\Program Files\OpenCV\lib\cxcored.lib" (optionally)
You may also want to change location and name of output file. For example, if you want the output .exe file to be put into the project folder, rather than Debug/ subfolder, you may type ./d.exe in "Link" tab -> "General" category -> "Output file name:".
Tune settings for "Release" configuration
Select "Settings For:"->"Win32 Release".
Choose "Link" tab -> "General" category -> "Object/library modules". Add "C:\Program Files\OpenCV\lib\cv.lib" "C:\Program Files\OpenCV\lib\HighGUI.lib" "C:\Program Files\OpenCV\lib\cvaux.lib" "C:\Program Files\OpenCV\lib\cxcore.lib" (optionally)
Optionally, you may change name of the .exe file: type ./.exe in "Link" tab -> "General" category -> "Output file name:".
Add dependency projects into workspace:
Choose from menu: "Project" -> "Insert project into workspace".
Select "C:\Program Files\OpenCV\cv\src\cv.dsp".
Do the same for "C:\Program Files\OpenCV\cxcore\src\cxcore.dsp", "C:\Program Files\OpenCV\cvaux\src\cvaux.dsp", "C:\Program Files\OpenCV\otherlibs\highgui\highgui.dsp".
Set dependencies:
Choose from menu: "Project" -> "Dependencies..."
For "cv" choose "cxcore",
For "cvaux" choose "cv", "cxcore",
for "highgui" choose "cxcore",
for your project choose all: "cxcore", "cv", "cvaux", "highgui".
After the tedious configuration above, you can finally enjoy the convenience of OpenCV. Just go to C:\Program Files\OpenCV\samples\c to get some sample programs.
If you don't know how to set up a sample program, here is my test using the sample program provided by OpenCV.
This program runs a morphology algorithm: my test file
Here is the sample program's screenshot:
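The sample referenced here demonstrates morphology. As a reminder of what the operation itself does, a minimal 3x3 binary dilation looks like this (a stand-alone version with no OpenCV dependency, so it can be compiled directly; the function name is mine):

```c
/* Minimal 3x3 binary dilation: a destination pixel is set if any pixel
 * in the 3x3 neighbourhood of the source is set.  src and dst are w*h
 * row-major 8-bit images; nonzero means "on". */
void dilate3x3(const unsigned char *src, unsigned char *dst, int w, int h)
{
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            unsigned char on = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int yy = y + dy, xx = x + dx;
                    if (yy >= 0 && yy < h && xx >= 0 && xx < w &&
                        src[yy * w + xx])
                        on = 1;
                }
            dst[y * w + x] = on ? 255 : 0;
        }
}
```

Erosion is the dual (a pixel survives only if the whole neighbourhood is set), and the OpenCV sample lets you combine the two into opening and closing.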
Hope this page is helpful to you. If you hit any problem during your setup, just go to Google for the answer; and if you manage to solve the trouble you bumped into, you might mail me how you solved it so I can add it to this page and share it with others. Thank you very much.
You can contact me through:
Error Report
Q: I can't run the file in test.rar!
My solution: Although I didn't have this problem myself, I solved it for a friend by adding the DLL files to the Debug folder. You can find the DLLs you need in
C:\Program Files\OpenCV\bin, or you can download the DLL files from here: dll
Another solution: You can also add the DLL files to C:\Program Files\Microsoft Visual Studio\VC98\Bin
Another link:
An entry-level tutorial by Robert Laganière
The latest (draft) version of the documentation
The latest (draft) version of documentation

Wednesday, November 23, 2005

[Vision]Robot hand-eye coordination based on stereo vision(IEEE Xplore)

Robot hand-eye coordination based on stereo vision
This paper appears in: IEEE Control Systems Magazine. Publication date: Feb 1995. Volume: 15, Issue: 1. On page(s): 30-39.

Abstract: This article describes the theory and implementation of a system that positions a robot manipulator using visual information from two cameras. The system simultaneously tracks the robot end-effector and visual features used to define goal positions. An error signal based on the visual distance between the end-effector and the target is defined, and a control law that moves the robot to drive this error to zero is derived. The control law has been integrated into a system that performs tracking and stereo control on a single processor with no special-purpose hardware at real-time rates. Experiments with the system have shown that the controller is so robust to calibration error that the cameras can be moved several centimeters and rotated several degrees while the system is running with no adverse effects.
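The control idea in the abstract, stacking the end-effector-to-target pixel errors from both cameras and driving the stacked vector to zero, can be sketched with a simple transpose-Jacobian update. This is a generic simplification for illustration, not the paper's actual control law, and the 4x3 image Jacobian J (mapping Cartesian end-effector motion to pixel motion in the two views) is assumed given:

```c
/* One control step of a toy stereo visual servo: e stacks the pixel
 * errors (end-effector minus target) from the left and right cameras,
 * J is the 4x3 image Jacobian, and the Cartesian update is
 *     dq = -lambda * J^T * e
 * which descends the squared stacked error. */
void stereo_servo_step(const double J[4][3], const double e[4],
                       double lambda, double dq[3])
{
    for (int j = 0; j < 3; ++j) {
        double s = 0.0;
        for (int i = 0; i < 4; ++i)
            s += J[i][j] * e[i];     /* (J^T e)_j */
        dq[j] = -lambda * s;
    }
}
```

With non-degenerate camera placement the stacked error vanishes only when the end-effector projects onto the target in both views, i.e. when they coincide in 3-D, which is why two uncalibrated cameras suffice for positioning.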

A tutorial on visual servo control, Hutchinson, S.; Hager, G.D.; Corke, P.I., IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651-670, Oct 1996
An active visual estimator for dexterous manipulation, Rizzi, A.A.; Koditschek, D.E., IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 697-713, Oct 1996
Robust asymptotically stable visual servoing of planar robots, Kelly, R., IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 759-766, Oct 1996
A modular system for robust positioning using feedback from stereo vision, Hager, G.D., IEEE Transactions on Robotics and Automation, vol. 13, no. 4, pp. 582-595, Aug 1997
End-effector position-orientation measurement, Jing Yuan; Yu, S.L., IEEE Transactions on Robotics and Automation, vol. 15, no. 3, pp. 592-595, Jun 1999
Binocular tracking: integrating perception and control, Bernardino, A.; Santos-Victor, J., IEEE Transactions on Robotics and Automation, vol. 15, no. 6, pp. 1080-1094, Dec 1999
Stable visual servoing of camera-in-hand robotic systems, Kelly, R.; Carelli, R.; Nasisi, O.; Kuchen, B.; Reyes, F., IEEE/ASME Transactions on Mechatronics, vol. 5, no. 1, pp. 39-48, Mar 2000
Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system, Dixon, W.E.; Dawson, D.M.; Zergeroglu, E.; Behal, A., IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 31, no. 3, pp. 341-352, Jun 2001
Vision-based nonlinear tracking controllers with uncertain robot-camera parameters, Zergeroglu, E.; Dawson, D.M.; de Querioz, M.S.; Behal, A., IEEE/ASME Transactions on Mechatronics, vol. 6, no. 3, pp. 322-337, Sep 2001
Positioning a camera with respect to planar objects of unknown shape by coupling 2-D visual servoing and 3-D estimations, Collewet, C.; Chaumette, F., IEEE Transactions on Robotics and Automation, vol. 18, no. 3, pp. 322-333, Jun 2002
Ping-pong player prototype, Acosta, L.; Rodrigo, J.J.; Mendez, J.A.; Marichal, G.N.; Sigut, M., IEEE Robotics & Automation Magazine, vol. 10, no. 4, pp. 44-52, Dec. 2003
Improvement of visual perceptual capabilities by feedback structures for robotic system FRIEND, Volosyak, I.; Kouzmitcheva, O.; Ristic, D.; Graser, A., IEEE Transactions on Systems, Man and Cybernetics, Part C, vol. 35, no. 1, pp. 66-74, Feb. 2005

Friday, November 18, 2005


"What's in the fridge?" "There is green tea and orange juice." The waiter takes orange juice out of the refrigerator and pours it into the guest's glass. At the APEC venue in Busan, a waiter robot that, like a human, provides services on request will be presented to the heads of state for the first time. Humanoid robots that walk like humans have been unveiled in Korea several times before, but this is the first appearance of a robot that converses with people and provides the corresponding services. "T-Rot", developed by the Ministry of Science and Technology's intelligent-robot frontier program for supporting human capabilities in daily life, is scheduled to serve drinks to the leaders on the 18th at a "robot cafe" at the APEC venue in Busan. Kim Mun-sang, head of the intelligent-robot program, said: "T-Rot is a service robot developed for elderly people with limited mobility and for the disabled," adding, "Because the robot can converse with people and fetch requested items, it was named T-Rot after 'Thinking Robot'." T-Rot is equipped with two cameras each for recognizing people and objects, so it can recognize in three dimensions its owner's face and posture as well as surrounding objects such as glasses and the refrigerator. It also has a laser-based capability for sensing its own position, and speech recognition for understanding what people say and holding a corresponding conversation. Its most important capability, however, is a human-like sense of touch for recognizing objects. Service robots live alongside people, so safety is paramount; artificial skin that is as soft as human skin and can accurately sense the feel of objects is therefore essential. The artificial skin developed by the team of Dr. Kang Dae-im and Dr. Kim Jong-ho at the Korea Research Institute of Standards and Science uses flexible polyamide and carries 3-axis sensors that detect vertical pressure and horizontal friction. When grasping a 100-gram object, the sensor can determine its weight to within 10 grams. T-Rot can therefore apply different amounts of force when holding a drink can versus a plastic bottle. When shaking hands, it senses the strength of the other person's grip through its fingers and responds with matching force; robots unveiled previously shook hands according to pre-programmed motions, so their movements were stiff. Kim Mun-sang said: "T-Rot can hold an egg without crushing it. We have already developed artificial skin that, like a human fingertip, can distinguish stimuli 1 mm apart, and it is currently being tested." The intelligent-robot program plans, building on T-Rot, to bring to market within two to three years service robots that help the elderly walk and can serve as conversation partners. Reporter: Lee Yeong-wan

Thursday, November 17, 2005

[LINK] Urban Search and Rescue Robot Competitions

You can find a great deal of information about the teams attending the RoboCup Rescue competition.
The awardees and the Team Description Papers are also at the link...
I think this rescue project could tie in with ITRI's police search robot.

Wednesday, November 16, 2005


Ahead of the opening of the 2005 International Robot Exhibition in Tokyo, the Japanese electronics group Hitachi demonstrated its humanoid robot "Emiew" to the media at a press conference on November 14. This two-wheeled robot can move at up to 6 km/h, slightly faster than the average human walking speed. (news from
Hitachi said the robot is equipped with a collision-avoidance system and can recognize about 100 words, combining them to understand and answer commands. The exhibition runs from November 30 to December 3 in Tokyo, with more than 150 companies participating. Reuters/Issei Kato (Reporting: Huang Jiansong / Zhang Minhui)

Tuesday, November 15, 2005

[Article] Blue Sky and Black Chimneys, by 詹偉雄 (20051115), China Times

In 1950, with three thousand US dollars in his pocket, Eiji Toyoda made a humble visit to Detroit. The object of his study was the River Rouge Complex on the city's west side, founded in 1915 by Henry Ford. This two-thousand-acre modern automobile city could produce eight thousand cars a day, two hundred times Toyota's output.

The year the Pistons took the 1989-90 season championship, Ben Wallace, a hyperactive sixteen-year-old from White Hall, Alabama, in the South, had no idea that on his shoulders rested a sacred task: carrying the Pistons tradition into the twenty-first century. For his fate and that of the people of Detroit were exactly alike: all that lay before them was poverty.


With savings scraped together from styling hair, he signed up for a basketball camp run by the New York Knicks' brawling power forward Charles Oakley. There, Oakley thoroughly demolished his dream of becoming a Thomas, a Magic Johnson, or a Jordan: his shooting wasn't accurate enough, his ball-handling wasn't slick enough, and every drive to the basket ended in a charging foul. Worse still, his SAT scores were so poor that no basketball powerhouse would take him. Oakley told Ben that apart from smashing opponents' noses and grinding away on defense, there was no place for him on a basketball court. In 1996, no NBA team drafted Ben; only the Washington Bullets (later renamed the Wizards) signed him as a free agent. In his sparse bench minutes, relying on the chest and arm muscles built through daytime weight training, he learned to judge precisely every path a rebounding ball might take, gradually revealing a talent for fighting for rebounds. In 2000, the Pistons appointed the retired star Dumars as their new president, and his first project was to buy Ben Wallace. No one imagined that this shortest center of the NBA's twenty-first century, joined later by the quick-firing Hamilton, point guard Chauncey Billups, and another "Bad Boy", Rasheed Wallace, would in the 2003-04 season mercilessly humiliate the Los Angeles Lakers' F4 and take the championship, bringing Detroit's blue-collar fury back to life thirteen years on.



[Vision] Walking Human Avoidance and Detection from A Mobile Robot using 3D Depth Flow

This paper shows walking-human avoidance and detection behavior of a mobile robot using 3D Depth Flow. We propose 3D Depth Flow, which is able to measure the 3D motion vector of every pixel between two time-sequential images. First, a definition of 3D Depth Flow and a simple 3D Depth Flow calculation method are given. Then an implementation of a real-time 3D Depth Flow generation system using a standard PC, and experimental results, are presented. Finally, as an application, a walking-human detection and avoidance task using a mobile robot in real environments is shown.
Paper is available at
Keyword: perception, humanoid, IEEE (GOOGLE)
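The quantity the paper defines, a per-pixel 3-D motion vector, can be reconstructed from two depth measurements plus a 2-D flow correspondence under a pinhole camera model: back-project both pixels to 3-D points and subtract. The sketch below follows that geometry; the intrinsics and names are my own assumptions, not the paper's notation:

```c
/* A 3-D point or vector. */
typedef struct { double x, y, z; } Vec3;

/* Back-project pixel (u, v) at depth Z through a pinhole camera with
 * focal lengths (fx, fy) and principal point (cx, cy). */
Vec3 backproject(double u, double v, double Z,
                 double fx, double fy, double cx, double cy)
{
    Vec3 p = { (u - cx) * Z / fx, (v - cy) * Z / fy, Z };
    return p;
}

/* Per-pixel 3-D flow: back-project the pixel at time t and its flow
 * correspondence at time t+1, then subtract the two 3-D points. */
Vec3 depth_flow(double u1, double v1, double Z1,   /* pixel at t   */
                double u2, double v2, double Z2,   /* pixel at t+1 */
                double fx, double fy, double cx, double cy)
{
    Vec3 p1 = backproject(u1, v1, Z1, fx, fy, cx, cy);
    Vec3 p2 = backproject(u2, v2, Z2, fx, fy, cx, cy);
    Vec3 d = { p2.x - p1.x, p2.y - p1.y, p2.z - p1.z };
    return d;
}
```

Unlike plain optical flow, the z component directly distinguishes an object approaching the robot from one merely moving across the image, which is what makes the representation useful for avoidance.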