Monday, November 28, 2005

[Vision]Visual Servoing Systems from NSI

I. Research Motivation and the State of Research at Home and Abroad

Many everyday human actions are based on information received through vision. If a robot is to perform similar tasks without relying heavily on other sensing signals or redesigning the surrounding environment, it must therefore be able to receive visual information and act on it. Because machine vision can emulate the human visual system and permits non-contact measurement of the environment, it is an important sensor for robotic systems. For systems that must interact with their environment, real-time vision is an ideal source of feedback signals. A video camera is a passive, non-contact, and non-destructive sensor that can perform measurement, recognition, and object tracking over a fairly large spatial range, making it an ideal sensing modality. Since today's computers can process visual data quickly, feedback control systems using cameras as sensors have become practical. Designing, analyzing, and implementing "visual servoing systems" that exploit camera-acquired visual data to interact with the environment is undoubtedly a challenging research topic, and its theoretical and practical problems, as well as the development and application of related technologies, have drawn broad attention from scholars and experts at home and abroad.

Visual servoing is a steadily maturing research direction in robot control. With the control task appropriately defined in terms of visual information, task accuracy can be guaranteed: it is not limited by the robot's own open-loop positioning accuracy, and system stability can be improved by the visual feedback loop. In fact, precise camera calibration is no longer strictly necessary, and since the price of real-time image-processing hardware has dropped sharply with the demand for multimedia technology, the research and application value of real-time visual servoing systems has attracted much attention.

Research on visual servoing can actually be traced back to the early 1970s, and it has received particular attention in the past ten years; it is an innovative field combining real-time computer vision, robotics, and control. In the 1980s, Weiss pioneered image-based visual servoing (IBVS), in contrast to traditional Cartesian/position-based visual servoing (CBVS). Feddema completed the first implementation of an IBVS system in 1989. In the early 1990s, Wilson used a Kalman filter to estimate object pose and thereby improve CBVS systems, while Corke proposed feature prediction and feedforward control techniques to achieve stable, high-performance feature tracking. In 1998, Malis and Chaumette developed the first hybrid visual servoing architecture, called 2.5D visual servoing, which controls the translational degrees of freedom with IBVS and estimates the required rotational motion using epipolar geometry, thus combining IBVS and CBVS.

Given the rapid development of visual servoing research, further breakthroughs can be expected within the next ten years. Visual servoing is already applied in many fields:
1. Manufacturing automation: weld-seam tracking, workpiece grasping and positioning, parts assembly.
2. Military applications: in-flight aircraft refueling, automatic docking of space shuttles.
3. Intelligent transportation systems: vehicle guidance, automatic aircraft landing, deep-sea vehicle operation.
4. Emulation of human behavior: playing table tennis and hockey, inverted-pendulum balancing, hitting and catching balls.

Key references:
1. S. Hutchinson, G. D. Hager and P. Corke, "A tutorial on visual servo control," IEEE Trans. on Robotics and Automation, vol. 12, pp. 651-670, Oct. 1996.
2. T. Drummond and R. Cipolla, "Real-time visual tracking of complex structures," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 932-946, Jul. 2002.
3. N. P. Papanikolopoulos, P. K. Khosla and T. Kanade, "Visual tracking of a moving target by a camera mounted on a robot: a combination of vision and control," IEEE Trans. on Robotics and Automation, vol. 9, pp. 14-35, 1993.
4. E. Malis, F. Chaumette and S. Boudet, "2-1/2-D visual servoing," IEEE Trans. on Robotics and Automation, vol. 15, pp. 238-250, Apr. 1999.
5. J. Stavnitzky and D. Capson, "Multiple camera model-based 3-D visual servo," IEEE Trans. on Robotics and Automation, vol. 16, no. 6, pp. 732-739, Dec. 2000.
6. D. Xiao, K. Ghosh, N. Xi and T. J. Tarn, "Sensor-based hybrid position/force control of a robot manipulator in an uncalibrated environment," IEEE Transactions on Control Systems Technology, vol. 8, no. 4, pp. 635-645, Feb. 2000.
7. F. Tendick, J. Voichick, G. Tharp and L. Stark, "A supervisory telerobotic control system using model-based vision feedback," in Proc. IEEE Int'l Conf. on Robotics and Automation, pp. 2280-2285, 1991.
8. E. Dickmanns and F. R. Schell, "Autonomous landing of airplanes by dynamic machine vision," in Proc. IEEE Workshop on Applications of Computer Vision, pp. 172-179, Nov. 1992.
II. Research Directions and Concrete Recommendations

In recent years, real-time computer vision, visual tracking, and vision-based motion control have gradually become important research directions. Although these fields broadly span vision, control, robotics, and artificial intelligence, they share a common trend: the integrated development of visual-information processing and control theory. Against this research background, future directions for visual servoing are outlined below:
1. Active visual tracking systems
The visual sensor produces image-space measurements that feed the servo control system, and the sensing task itself, when tracking visual features, is often also a control task; especially when the camera orientation can be controlled in real time, this active capability must be exploited effectively to extend the sensing range. Combining software image-processing techniques with appropriate hardware to accelerate real-time image acquisition and processing is a necessary direction of improvement for an effective real-time tracking system. In practice, the time delay introduced by image acquisition has an unavoidable effect on real-time control performance, so predictors built on suitable motion models, to compensate effectively for this delay, become increasingly important.
2. Real-time 3D reconstruction
In practical applications of visual servoing, the availability of three-dimensional information often determines how far system functionality can be extended. Moreover, in real-time control systems with binocular or multi-camera vision, the correspondence of image features must be solved effectively before stereo visual servoing can be widely applied. In Cartesian position-based visual servoing (CBVS) systems, effective 3D reconstruction with real-time updates would greatly improve system performance and broaden the range of applications.
3. Visual servoing system architecture and analysis
The architecture of a visual servoing system can be improved in terms of the arrangement of the visual sensors and the subsequent signal-analysis scheme. For example, the hybrid visual servoing systems developed in recent years combine the advantages of image-based visual servoing (IBVS) and Cartesian position-based visual servoing (CBVS) to improve performance, but practical issues and room for improvement remain. Innovative visual servoing architectures will be key to raising system performance.
4. Task encoding and visual servoing control theory
How to verify, from measurable visual signals, that a control task in three-dimensional space has been accomplished precisely is the first requirement for a successful visual servoing system. Since the original control task is defined in a 3D space that cannot be measured directly and precisely, it is necessary to redefine the control task in an appropriately chosen space, so that the original task can still be accomplished accurately even when the visual sensor is uncalibrated; this is called task encoding. Based on such task encoding, various feedback controllers can then be developed that exploit controller robustness to achieve the control objective precisely even with imprecisely calibrated, or entirely uncalibrated, visual sensing.
5. Multi-camera visual servoing
As visual sensors of all kinds become cheaper, servo systems using multiple cameras have become practical. Essentially, the redundant visual information measured in overlapping fields of view can improve system robustness, while non-overlapping configurations effectively extend the sensing range. This kind of visual servoing architecture can be applied to large-scale visual servoing tasks, autonomously accomplishing, quickly and accurately, large positioning tasks that used to require substantial manual assistance.
6. Multi-sensor-based hybrid control
Systems relying on vision alone may be limited in the control tasks they can accomplish effectively, while systems that must execute complex tasks often need multiple sensors to achieve their control objectives. Although processing each sensing signal individually and executing the corresponding task can often meet practical requirements, integrating the various kinds of sensory information to accomplish the combined control task effectively and precisely, in a systematic way, deserves continued research, so that multiple sensors can be exploited for control tasks in complex environments.
7. Tele-operated monitoring and security systems
Through an image-based human-machine interface, the user sets control objectives while the remote plant accomplishes the task autonomously. This architecture can be applied to the monitoring of automated warehouse management and home security systems, improving the flexibility and reliability of remote monitoring. It also avoids having to operate in person in dangerous environments such as space, the deep sea, and high-radiation areas.
8. Navigation based on visual servoing
Visual servoing can be applied to various vehicles: navigation, automatic driving, and automatic collision avoidance for road vehicles; automatic takeoff and landing of aircraft; and missile guidance and control. Besides adding convenience and safety to everyday life, in military use it avoids detection and jamming by the enemy, since no radio waves need to be emitted.

[Vision]A tutorial on visual servo control

Hutchinson, S.; Hager, G.D.; Corke, P.I.; Dept. of Electr. & Comput. Eng., Illinois Univ., Urbana, IL
This paper appears in: IEEE Transactions on Robotics and Automation. Publication Date: Oct 1996
[Citing Documents]

17. Multiple camera model-based 3-D visual servo, Stavnitzky, J.; Capson, D., IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 732-739, Dec. 2000
22. Nonlinear controllability and stability analysis of adaptive image-based systems, Conticelli, F.; Allotta, B., IEEE Transactions on Robotics and Automation, vol. 17, no. 2, pp. 208-214, Apr. 2001
23. Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system, Dixon, W.E.; Dawson, D.M.; Zergeroglu, E.; Behal, A., IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 31, no. 3, pp. 341-352, Jun. 2001
43. Visually guided landing of an unmanned aerial vehicle, Saripalli, S.; Montgomery, J.F.; Sukhatme, G.S., IEEE Transactions on Robotics and Automation, vol. 19, no. 3, pp. 371-380, Jun. 2003
46. Autonomous 3-D positioning of surgical instruments in robotized laparoscopic surgery using visual servoing, Krupa, A.; Gangloff, J.; Doignon, C.; de Mathelin, M.F.; Morel, G.; Leroy, J.; Soler, L.; Marescaux, J., IEEE Transactions on Robotics and Automation, vol. 19, no. 5, pp. 842-853, Oct. 2003
48. Nonlinear visual mapping model for 3-D visual tracking with uncalibrated eye-in-hand robotic system, Jianbo Su; Yugeng Xi; Hanebeck, U.D.; Schmidt, G., IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 34, no. 1, pp. 652-659, Feb. 2004
49. Visual servoing invariant to changes in camera-intrinsic parameters, Malis, E., IEEE Transactions on Robotics and Automation, vol. 20, no. 1, pp. 72-81, Feb. 2004
50. Vision-based PID control of planar robots, Cervantes, I.; Garrido, R.; Alvarez-Ramirez, J.; Martinez, A., IEEE/ASME Transactions on Mechatronics, vol. 9, no. 1, pp. 132-136, Mar. 2004
51. Robust direct visual servo using network-synchronized cameras, Schuurman, D.C.; Capson, D.W., IEEE Transactions on Robotics and Automation, vol. 20, no. 2, pp. 319-334, Apr. 2004
56. Keeping features in the field of view in eye-in-hand visual servoing: a switching approach, Chesi, G.; Hashimoto, K.; Prattichizzo, D.; Vicino, A., IEEE Transactions on Robotics, vol. 20, no. 5, pp. 908-914, Oct. 2004
60. Image-based visual servo control of aerial robotic systems using linear image features, Mahony, R.; Hamel, T., IEEE Transactions on Robotics, vol. 21, no. 2, pp. 227-239, Apr. 2005

Friday, November 25, 2005

[OpenCV] Install OpenCV for VC6.0

OpenCV is a library for Computer Vision released by Intel Corp. It includes more than 300 of the most commonly used algorithms in Image Processing, but for now it only supports C/C++. The related information is below:
http://www.intel.com/research/mrl/research/opencv/index.htm#press
And the related Chinese link:
http://www.assuredigit.com/forum/display_topic_threads.asp?ForumID=11&TopicID=3471&PagePosition=1
 
Download OpenCV Lib
The first step is to download the OpenCV Lib; you can download it here:
http://sourceforge.net/projects/opencvlibrary/
or you can download it from my website: OpenCV Beta4.0 [The current release is OpenCV Beta5.0]
The second step is to download DirectX from the Microsoft website:
http://msdn.microsoft.com/directx
or you can download it from my website: DirectX 9.0c (It's a little big.)
 
After installing the Lib, go to the root folder of OpenCV. If you didn't change any configuration, the destination should be:
C:\Program Files\OpenCV. Then go to C:\Program Files\OpenCV\docs\faq.htm
You can solve lots of questions by yourself from there, but it does not cover every problem...
Recompile OpenCV Lib
Go to C:\Program Files\OpenCV\_make and you will find opencv.dsw (opencv.sln). Open it and recompile it in both Win32 Debug and Release modes, and you will get
the libraries you need in C:\Program Files\OpenCV\lib
* Don't know how to compile? Just ask your friends or your classmates.
Set the Environment Variable
In the "System Environment Variables" table, find the "Path" variable and add "C:\Program Files\OpenCV\bin" to it.
*The environment variables can be found through "My Computer".

Customize your project
Whenever you create a new project, you should do the following customization yourself, or you won't be able to use the OpenCV Lib.
The steps below are collected from faq.htm, plus my small modifications.
Customize project settings:
Activate project setting dialog by choosing menu item "Project"->"Settings...".
Select your project in the right pane.
Tune settings, common to both Release and Debug configurations:
Select "Settings For:"->"All Configurations"
Choose "C/C++" tab -> "Preprocessor" category -> "Additional Include Directories:". Add comma-separated relative (to the .dsp file) or absolute paths: (C:\Program Files\OpenCV\cxcore\include,C:\Program Files\OpenCV\cv\include,C:\Program Files\OpenCV\otherlibs\highgui,C:\Program Files\OpenCV\cvaux\include)
Choose "Link" tab -> "Input" category -> "Additional library path:". Add the paths to all necessary import libraries (cxcore[d].lib, cv[d].lib, highgui[d].lib, cvaux[d].lib), like: (C:\Program Files\OpenCV\lib\CVd.lib,C:\Program Files\OpenCV\lib\cxcore.lib,C:\Program Files\OpenCV\lib\cxcored.lib,C:\Program Files\OpenCV\lib\cv.lib,C:\Program Files\OpenCV\lib\highgui.lib,C:\Program Files\OpenCV\lib\highguid.lib,C:\Program Files\OpenCV\lib\cvaux.lib,C:\Program Files\OpenCV\lib\cvauxd.lib)
Tune settings for "Debug" configuration
Select "Settings For:"->"Win32 Debug".
Choose "Link" tab -> "General" category -> "Object/library modules". Add "C:\Program Files\OpenCV\lib\CVd.lib" "C:\Program Files\OpenCV\lib\HighGUId.lib" "C:\Program Files\OpenCV\lib\cvauxd.lib" "C:\Program Files\OpenCV\lib\cxcored.lib" (optionally)
You may also want to change location and name of output file. For example, if you want the output .exe file to be put into the project folder, rather than Debug/ subfolder, you may type ./d.exe in "Link" tab -> "General" category -> "Output file name:".
Tune settings for "Release" configuration
Select "Settings For:"->"Win32 Release".
Choose "Link" tab -> "General" category -> "Object/library modules". Add "C:\Program Files\OpenCV\lib\cv.lib" "C:\Program Files\OpenCV\lib\HighGUI.lib" "C:\Program Files\OpenCV\lib\cvaux.lib" "C:\Program Files\OpenCV\lib\cxcore.lib" (optionally)
Optionally, you may change name of the .exe file: type ./.exe in "Link" tab -> "General" category -> "Output file name:".
Add dependency projects into workspace:
Choose from menu: "Project" -> "Insert project into workspace".
Select "C:\Program Files\OpenCV\cv\src\cv.dsp".
Do the same for "C:\Program Files\OpenCV\cxcore\src\cxcore.dsp", "C:\Program Files\OpenCV\cvaux\src\cvaux.dsp", "C:\Program Files\OpenCV\otherlibs\highgui\highgui.dsp".
Set dependencies:
Choose from menu: "Project" -> "Dependencies..."
For "cv" choose "cxcore",
For "cvaux" choose "cv", "cxcore",
for "highgui" choose "cxcore",
for your project choose all: "cxcore", "cv", "cvaux", "highgui".
After the above tedious configuration, you can now finally enjoy the convenience of OpenCV. Just go to C:\Program Files\OpenCV\samples\c to get some sample programs.
If you don't know how to set up your sample program, here is my test using a sample program provided by OpenCV.
This program performs a morphology algorithm: my test file
Here is the sample program's Screen Shot:
 
Hope this file is of help to you. If you run into any problem during your setup, just go to Google for the answer; and if you fix the trouble you bumped into, you are welcome to
mail me how you solved your problem, and I can add it to my page and share it with others. Thank you very much.
You can contact me through: keynes@cmlab.csie.ntu.edu.tw
Error Report
Q: I can't run the file in test.rar!
My solution: Although I didn't have this problem myself, I solved it for a friend by adding the dll files to the Debug folder. You can find the dlls you need in
C:\Program Files\OpenCV\bin, or you can download the dll files from here: dll
Another solution: You can also add the dll files to C:\Program Files\Microsoft Visual Studio\VC98\Bin
[Original]http://graphics.csie.ntu.edu.tw/~keynes/OpenCV/
Another link is http://graphics.csie.ntu.edu.tw/~jean/graphix/tech/25/
An entry-level tutorial by Robert Laganière
The latest (draft) version of the documentation

Wednesday, November 23, 2005

[Vision]Robot hand-eye coordination based on stereo vision(IEEE Xplore)

This paper appears in: IEEE Control Systems Magazine. Publication Date: Feb 1995. Volume: 15, Issue: 1. On page(s): 30-39

Abstract: This article describes the theory and implementation of a system that positions a robot manipulator using visual information from two cameras. The system simultaneously tracks the robot end-effector and visual features used to define goal positions. An error signal based on the visual distance between the end-effector and the target is defined, and a control law that moves the robot to drive this error to zero is derived. The control law has been integrated into a system that performs tracking and stereo control on a single processor, with no special-purpose hardware, at real-time rates. Experiments with the system have shown that the controller is so robust to calibration error that the cameras can be moved several centimeters and rotated several degrees while the system is running, with no adverse effects.


1. A tutorial on visual servo control, Hutchinson, S.; Hager, G.D.; Corke, P.I., IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651-670, Oct. 1996
2. An active visual estimator for dexterous manipulation, Rizzi, A.A.; Koditschek, D.E., IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 697-713, Oct. 1996
3. Robust asymptotically stable visual servoing of planar robots, Kelly, R., IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 759-766, Oct. 1996
4. A modular system for robust positioning using feedback from stereo vision, Hager, G.D., IEEE Transactions on Robotics and Automation, vol. 13, no. 4, pp. 582-595, Aug. 1997
5. End-effector position-orientation measurement, Jing Yuan; Yu, S.L., IEEE Transactions on Robotics and Automation, vol. 15, no. 3, pp. 592-595, Jun. 1999
6. Binocular tracking: integrating perception and control, Bernardino, A.; Santos-Victor, J., IEEE Transactions on Robotics and Automation, vol. 15, no. 6, pp. 1080-1094, Dec. 1999
7. Stable visual servoing of camera-in-hand robotic systems, Kelly, R.; Carelli, R.; Nasisi, O.; Kuchen, B.; Reyes, F., IEEE/ASME Transactions on Mechatronics, vol. 5, no. 1, pp. 39-48, Mar. 2000
8. Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system, Dixon, W.E.; Dawson, D.M.; Zergeroglu, E.; Behal, A., IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 31, no. 3, pp. 341-352, Jun. 2001
9. Vision-based nonlinear tracking controllers with uncertain robot-camera parameters, Zergeroglu, E.; Dawson, D.M.; de Querioz, M.S.; Behal, A., IEEE/ASME Transactions on Mechatronics, vol. 6, no. 3, pp. 322-337, Sep. 2001
10. Positioning a camera with respect to planar objects of unknown shape by coupling 2-D visual servoing and 3-D estimations, Collewet, C.; Chaumette, F., IEEE Transactions on Robotics and Automation, vol. 18, no. 3, pp. 322-333, Jun. 2002
11. Ping-pong player prototype, Acosta, L.; Rodrigo, J.J.; Mendez, J.A.; Marichal, G.N.; Sigut, M., IEEE Robotics & Automation Magazine, vol. 10, no. 4, pp. 44-52, Dec. 2003
12. Improvement of visual perceptual capabilities by feedback structures for robotic system FRIEND, Volosyak, I.; Kouzmitcheva, O.; Ristic, D.; Graser, A., IEEE Transactions on Systems, Man and Cybernetics, Part C, vol. 35, no. 1, pp. 66-74, Feb. 2005

Friday, November 18, 2005

[NEWS]Tactile robot T-Rot serves drinks to APEC leaders


At the robot café inside the APEC venue in Busan, the waiter robot "T-Rot" pours a drink for program director Kim Mun-sang. T-Rot's fingers carry sensors whose sensitivity is close to the human sense of touch, so it can grip objects with different forces depending on their material.
-------------------------------------------------
"What's in the refrigerator?" "There are green tea and orange juice." The waiter takes the orange juice out of the refrigerator and pours it into the guest's glass. The Asia-Pacific Economic Cooperation (APEC) venue in Busan will, for the first time, present to the national leaders a waiter robot that serves on request just as a person would. Humanoid robots that walk like humans have been unveiled in Korea several times before, but a robot that converses with people and provides the corresponding services is making its first appearance. "T-Rot", developed by the Ministry of Science and Technology's frontier program for intelligent robots supporting daily human life, is scheduled on the 18th to serve drinks to the leaders at the "robot café" in the Busan APEC venue. Kim Mun-sang, director of the intelligent robot program, said: "T-Rot is a service robot developed for elderly people with limited mobility and for the disabled. It can converse with people and fetch requested objects, so it was named T-Rot after 'Thinking Robot'." T-Rot is equipped with two cameras each for recognizing people and objects, so it can recognize in three dimensions its owner's face and posture as well as surrounding objects such as glasses and refrigerators. It also has laser-based self-localization and voice recognition that lets it understand speech and carry on a corresponding conversation. But its most important capability is a human-like tactile sense for recognizing objects. Service robots live alongside people, so safety is paramount; artificial skin that is as soft as human skin and can accurately sense the feel of objects is therefore a necessity. The artificial skin developed by the group of Dr. Kang Dae-im and Dr. Kim Jong-ho (phonetic) at the Korea Research Institute of Standards and Science uses flexible polyamide and carries 3-axis sensors that measure vertical pressure and horizontal friction. When holding a 100-gram object, the sensor can estimate its weight to within 10 grams. T-Rot can therefore apply different forces when holding a canned drink and a plastic bottle. When shaking hands, it senses the strength of the other person's grip through its fingers and responds with a matching force; robots unveiled previously shook hands according to pre-programmed motions, so their movements were stiff. Kim Mun-sang said: "T-Rot can hold an egg without crushing it. We have already developed artificial skin that, like a human fingertip, can distinguish stimuli 1 mm apart, and it is now being tested." The intelligent robot program plans, building on T-Rot, to bring to market within two to three years service robots that help the elderly walk and can serve as conversation partners. Reporter Lee Yeong-wan ywlee@chosun.com

Thursday, November 17, 2005

[LINK] Urban Search and Rescue Robot Competitions


You can find a lot of information about the teams attending the RoboCup Rescue competition.
There are also awardee and team description papers at the link...
http://robotarenas.nist.gov/awardee_papers.htm
I think this rescue project could tie in with the ITRI police search robot.

Wednesday, November 16, 2005

[NEWS]Hitachi's humanoid robot "Emiew"


Ahead of the opening of the 2005 International Robot Exhibition in Tokyo, the Japanese electronics group Hitachi showed its humanoid robot "Emiew" to the media at a press conference on November 14. This two-wheeled robot can travel at up to 6 km/h, a little faster than the average human walking speed. (news from yam.com) Hitachi said the robot is equipped with a collision-avoidance system and can recognize about 100 words, which it can combine in order to understand and answer commands. The exhibition will run from November 30 to December 3 in Tokyo, with more than 150 companies taking part. Reuters/Issei Kato (Reporting: Huang Jian-song / Zhang Min-hui) http://www.robotw.com/modules/news/article.php?storyid=733

Tuesday, November 15, 2005

[Article]Blue Sky and Black Chimneys, by 詹偉雄 (20051115) chinatimes

In 1950, with three thousand US dollars in his pocket, Eiji Toyoda made a humble visit to Detroit. The object of his study was the River Rouge Complex west of the city, founded by Henry Ford in 1915; this two-thousand-acre modern automobile city could produce eight thousand cars a day, two hundred times Toyota's output.

Returning home deeply moved, Eiji Toyoda called Taiichi Ohno, the manager of his own car plant, into the office: "Catch up with Ford's productivity as fast as you can." The Toyota miracle that followed is familiar to everyone and needs no retelling; what is worth mentioning is the "butterfly effect" of "Eiji's will": that declaration of war at the foot of Mount Fuji would eventually produce a great NBA team of notoriously ugly playing style.

Throughout the 1980s, Detroit was picked apart by swarms of small Toyota Corollas, and 150,000 unemployed blue-collar families poured their pent-up resentment into the Detroit Pistons. That tragic decade closed only after the "Bad Boys", center Bill Laimbeer, sharpshooting guard Joe Dumars, swingman point guard Isiah Thomas, and rebounding king Dennis "the Worm" Rodman, took two championships with their furious, physical style of play.

The year the Pistons won the 1989-90 title, Ben Wallace, a hyperactive sixteen-year-old in White Hall, Alabama, in the Deep South, had no idea that a sacred mission rested on his shoulders: to carry the Pistons tradition into the twenty-first century. His fate was exactly that of the people of Detroit: all that lay ahead was poverty.

Ben came from a typical Southern farming family of eleven children (he was the tenth). The family lived on a small cotton field worked by his mother, Sadie, and the children's clothes were all sewn by her. In fact, the Wallace family never owned a car, and theirs was the last oil-lamp household in the area to be connected to electricity. Ben loved basketball from childhood, but being small, he realized early that only by chasing rebounds, making steals, and diving for balls beyond the sideline could he get a touch of the ball. By the time he had grown to six foot seven, this resourceful on-court craft had naturally made him the king of the neighborhood kids. "I could shoot from long range like Isiah Thomas, pass without blinking like Magic Johnson, and dunk straight up like Jordan," Ben later recalled. But White Hall was, after all, a tiny dot on the map that no basketball scout would ever visit, and Ben spent most of his time instead on "hairstyle sculpting"; a pair of deft hands and an innate gift for styling earned him three dollars for every head he did.

With the money saved from doing hair, he enrolled in the basketball camp run by Charles Oakley, the New York Knicks' brawling power forward. There, Oakley thoroughly demolished his dream of becoming a Thomas, a Magic Johnson, or a Jordan: his shooting was not accurate enough, his ball-handling not slick enough, and he committed a charge every time he drove to the basket. Worse still, his SAT scores were so poor that no basketball powerhouse would take him. Oakley told Ben that apart from smashing into opponents and fighting on defense, there was no place for him on a basketball court. In 1996, no NBA team drafted Ben; only the Washington Bullets (later renamed the Wizards) took him on as a free agent. In his sparse minutes off the bench, relying on the chest and arm muscles built up through daytime weight training and on precisely reading, in practice, every path a rebounding ball could take, he gradually revealed his talent for fighting for rebounds. In 2000, the Pistons appointed the retired star Dumars as their new president, and his first project was to buy Ben Wallace. No one imagined that this shortest center of the NBA's twenty-first century, joined later by the "quick-trigger" Richard Hamilton, point guard Chauncey Billups, and another "Bad Boy," Rasheed Wallace, would in the 2003-04 season mercilessly humiliate the Los Angeles Lakers' F4 and take the championship, bringing Detroit's blue-collar fury back to life thirteen years later.

After a brawl with the Indiana Pacers last year, Ben Wallace calmly accepted his suspension: "What happened happened; accept it. After all, we are all ordinary people!" Every Detroit fan stood by him, because everyone knew Ben was playing with a heavy heart: his mother, Sadie, had slipped and fallen in a department store shortly before, and died.

Driving past Detroit, don't forget to look up at the black chimneys against the sky. They are the hope of all ordinary people, from a penniless Eiji Toyoda to a band of men who never became stars yet won championship rings...

[Vision] Walking Human Avoidance and Detection from A Mobile Robot using 3D Depth Flow

Abstract:
This paper shows the walking-human avoidance and detection behavior of a mobile robot using 3D Depth Flow. We propose 3D Depth Flow, which is able to measure the 3D motion vector of every pixel between two time-sequential images. First, a definition of 3D Depth Flow and a simple 3D Depth Flow calculation method are given. Then an implementation of a real-time 3D Depth Flow generation system using a standard PC, together with experimental results, is described. Finally, as an application, a walking-human detection and avoidance task using a mobile robot in real environments is shown.
Paper is available at
http://www.jsk.t.u-tokyo.ac.jp/~k-okada/paper/2001_icra_okada_3dflow.pdf
Keywords: perception, humanoid, IEEE (Google)