Tuesday, August 28, 2007

Guide Robot


Japan's Toyota has developed a guide robot, "Robina", which moves around freely on wheels. "Robina" stands 1.2 meters tall; the photo shows the robot on duty at Toyota's exhibition hall on the 27th. (AFP)

Wednesday, August 22, 2007

Walking Robot Unravels the Mystery of How Humans Walk

Researchers from Germany recently reported that a walking robot able to adapt to different terrain could help scientists understand the mystery of how humans walk, and may in the future even improve treatments for injuries to the spinal cord and other parts of the body.

According to the researchers who built it, the 30 cm tall robot, named RunBot, could previously walk only on flat ground and fell over whenever it met a slope. Since being fitted with an infrared eye, however, RunBot can detect a slope in its path and, after four or five attempts, adjust its own gait to climb it.

Before it learns to climb, the robot falls over again and again, yet it can take three to four steps per second, faster than a typical human's 1.5 to 2.5 steps per second. Florentin Woergoetter, a researcher at the University of Goettingen in Germany who took part in RunBot's design, said: "It keeps learning by trial and error; it needs about four or five falls before it gets it."

Woergoetter, who published the work in the journal Computational Biology, compares RunBot's learning process to that of a toddler learning to walk. Like a human, RunBot leans slightly forward when walking upright, and its steps become shorter when climbing a slope.

One key to RunBot's walking is its "brain": its infrared eye is connected to control circuitry that prompts it to change its gait when needed. Earlier research has shown that the human motor control system is built from multiple levels of interaction between the muscles and the spinal cord; most of this operates autonomously, but certain movements require a higher level of control, namely the brain.

Woergoetter said this relationship explains why some patients paralyzed below the waist can use their legs on a treadmill with the help of assistive devices, and it lies at the core of the RunBot research. He noted that using robot studies to better understand how the different parts of the human body cooperate during walking could bring real benefits to healthcare.

Such robotics research could not only help design better prosthetic limbs for people with disabilities, but also help clinical therapists work with patients to fight conditions such as spinal cord injury and regain mobility. Woergoetter said: "RunBot is essentially a model of human upright walking; it will help us better understand what is going on and lead to better treatments."

Saturday, August 18, 2007

Fixing Garbled Graphics When a Word Document with Embedded Visio Files Is Converted to PDF

When you convert a Word 2002 document that contains Visio 2002 or Visio 2003 drawings to PDF format, the text in the drawings is displayed incorrectly.

Resolution

To resolve this problem, set the PDF Print Quality option to 600 dpi. To do this, follow these steps:

1. If they are running, exit Visio 2003 or Visio 2002, Word 2002, and Adobe Acrobat.
2. Click Start, click Run, type control printers in the Open box, and then click OK.
3. Right-click Adobe PDF, and then click Printing Preferences.
4. Click the Layout tab, and then click Advanced.
5. In the Adobe PDF Converter Advanced Options dialog box, expand Graphic, and then click Print Quality.
6. Click 600dpi, and then click OK two times.
7. Start Word 2002, open the document, and then print it to Adobe Acrobat. To do this:
a. In Word 2002, click Print on the File menu.
b. In the Name box, click Adobe PDF, and then click OK.
c. In the Save PDF File As dialog box, specify a file name and the location where you want to save the PDF file, and then click Save.
Who would have thought the problem was that Visio and the PDF driver couldn't agree on print quality!
http://support.microsoft.com/kb/892955/zh-tw?spid=2963&sid=480

Friday, August 10, 2007

Using Embedded Linux in a reconfigurable high-res network camera

by Andrey Filippov (Dec. 3, 2002)

Background

About a year ago I wrote an article which was published by LinuxDevices.com, and after it was mentioned on Slashdot my company (Elphel Inc.) was flooded with inquiries regarding general purpose network cameras, rather than the "high speed gated intensified" ones I wrote about. Also, the Model 303 network camera I wrote about, being high resolution, was rather slow -- the ETRAX100LX requires nearly 5 seconds for JPEG compression of a 1280x1024 color frame.



The Model 303 High Speed Gated Intensified Camera


My intention to increase the camera's frame rate was mentioned in the "TODO" section of the previous article, but the way I actually did it turned out to be very different from what I had anticipated. I decided not to use the JPEG-2000 compressor chip from Analog Devices. Nor did I make use of the new ETRAX multi-chip module from Axis Communications, as I wanted more memory (both SDRAM and flash) than Axis put into the MCM version of its ETRAX controller. Also, in the new camera there was no room for the QuickLogic FPGA I had intended to use for fixed-pattern noise elimination; that function needs about a tenth of the resources of image compression, so it easily fits in the same FPGA as the compressor.

Instead, what I began to contemplate was . . .

An Open Source reconfigurable camera

I first investigated the possibility of using a reprogrammable FPGA large enough to handle basic image acquisition tasks, fixed-pattern noise elimination, and image compression (i.e. baseline JPEG) without slowing down the sensor (~20 MHz pixel rate). An additional goal was to be able to use free FPGA development software, so it would make sense for me to post the Verilog sources: users would then be able to build not only the camera software from source, but the hardware (FPGA) part as well.

Incidentally, I'm not sure it still makes sense to call it "hardware", since you do not even need to open the case to modify it. But there are at least two arguments that it is still hardware. First, it is easy to fry the chip by installing the wrong code in the FPGA (I had to keep a finger on it while first debugging the download process). Second, there is the speed: compression performance increased nearly 100x, and my Athlon-700 based PC is about 2.5 times slower at decoding than the camera's FPGA is at encoding (both require approximately the same amount of computation), yet the FPGA has no heat sink and runs just slightly warmer than its surroundings.

Picking an FPGA

It was not difficult to find a good FPGA candidate: the latest member of Xilinx's low-cost Spartan-IIE family, the 300K-gate XC2S300E (see the note below for an update). Plus, Xilinx offers its free ISE WebPack development software for download; it worked fine for the design and was able to make use of 98% of the chip's resources.

Unfortunately, the free version of the third-party simulator Xilinx included with the development package proved useless for my purposes, as its 500-line limit is far too small for simulating a design of this size. I do not think this is a real problem for the Linux community, since some of the open source simulators can probably be combined with Xilinx ISE for Linux.

Before starting the actual design, I had to evaluate whether the JPEG compressor and the other required circuitry would fit into the selected FPGA (I had no prior experience with Xilinx devices). So I looked for commercial IP cores and found that they do exist (although they're rather expensive), which convinced me that the chip could handle the job.

I also located Xilinx application note XAPP610, which includes source code for an 8x8 DCT core that is fast enough and uses less than 30% of the chip (I later found that I had to modify it).

Architecture of the Model 313

I didn't get around to really starting the new design until August, at which time I downloaded the Xilinx development software and designed the schematic and PCB layout for the Model 313 camera, giving it exactly the same physical dimensions as the old one.


Block diagram: Model 313 Reconfigurable Network Camera



Together with the new FPGA came some other components . . .
  • 16MB SDRAM memory, connected directly to the FPGA so image processing does not reduce CPU bus bandwidth.

  • multi-channel programmable clock: its 20 MHz crystal oscillator output and one of its three PLLs (25MHz) drive the ETRAX100LX and the Ethernet transceiver, respectively, leaving the other two PLLs free for flexible FPGA clocking. This Cypress CY22393FC part combines EEPROM (so the right frequencies are applied to the CPU and network transceiver at power-up) with an I2C-compatible interface, providing an extra degree of flexibility to a reconfigurable FPGA.
The SRAM-based FPGA is configured using the bit-stream file that is generated by the Xilinx development software and stored in the camera flash memory. It is transferred to the chip using just 4 pins of the ETRAX general purpose interface port which is connected to the dedicated JTAG pins of the XC2S300E. It takes just a single line in one of the init scripts ("cat /etc/x313.bit > /dev/fpgaconfjtag") and a fraction of a second to bring it to life.
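
For illustration, here is a minimal C sketch of the same configuration step the init-script one-liner performs. The file and device names come straight from the text above; the buffer size and error handling are my own assumptions, not Elphel code:

    /* Stream the FPGA bitstream into the JTAG configuration device;
     * equivalent to "cat /etc/x313.bit > /dev/fpgaconfjtag". */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        FILE *in  = fopen("/etc/x313.bit", "rb");     /* bitstream in flash */
        FILE *out = fopen("/dev/fpgaconfjtag", "wb"); /* 4-pin JTAG driver */
        char buf[4096];
        size_t n;

        if (!in || !out) {
            perror("open");
            return EXIT_FAILURE;
        }
        while ((n = fread(buf, 1, sizeof buf, in)) > 0)
            fwrite(buf, 1, n, out);  /* driver clocks the bits out over JTAG */
        fclose(in);
        fclose(out);
        return 0;
    }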

The FPGA code is designed around a four-channel SDRAM controller. It uses internal "Block RAM" embedded memory (the XC2S300E chip has sixteen 4096-bit blocks) for ping-pong buffering of each channel. The controller provides interleaved access to the following channels . . .
  • Channel 0 -- carries raw or processed data, 8 or 16 bits per pixel, from the sensor to the memory. Data is arranged in horizontal 256-pixel lines (128 pixels for 16-bit data). It is also possible to write partial blocks (the last block in a scan line).

  • Channel 1 -- used to read per-pixel calibration data prepared in advance by software. For each pixel there is an 8-bit value to subtract from the 10-bit sensor data; this value may be prescaled by 1, 2, or 4. The other byte contains a sensitivity calibration, so depending on a global prescaling factor each pixel value may be individually adjusted within a +/-12.5%, +/-25%, or +/-50% range (see the sketch after this list).

  • Channel 2 -- provides data for the JPEG encoder. For 4:2:0 encoding, where the two color components (Cb and Cr) have half the brightness resolution in both directions (which matches the sensor's Bayer color filters), the minimal coding unit (MCU) is a 16x16-pixel square that is later encoded as four 8x8 blocks for the brightness (Y) component and one 8x8 block for each of Cb and Cr (six in total per MCU). If the data is encoded "live", the SDRAM controller asserts a "ready" signal for this channel whenever at least 16 lines have been written by the sensor (channel 0).

  • Channel 3 -- provides CPU access to the SDRAM. Normally it is used to read out raw sensor data and to write the calibration data for FPN elimination (which the CPU calculates from the raw pixel data).
The SDRAM controller runs at 75MHz (16-bit wide data), which is enough for a pixel rate of up to 25MHz and quasi-simultaneous channel operation.
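
To make the channel 1 description concrete, here is a small C model of the per-pixel FPN correction it feeds. The exact bit layout and rounding of the real FPGA are not spelled out above, so the scaling details below are illustrative assumptions:

    #include <stdint.h>

    /* pix:        10-bit raw sensor value (0..1023)
     * offs:       8-bit per-pixel dark level to subtract
     * gain:       8-bit signed per-pixel sensitivity correction
     * offs_scale: global prescale of the subtrahend: 1, 2 or 4
     * gain_shift: 1, 2 or 3, selecting the +/-50%, +/-25% or
     *             +/-12.5% adjustment range (an assumed encoding) */
    static uint16_t fpn_correct(uint16_t pix, uint8_t offs, int8_t gain,
                                int offs_scale, int gain_shift)
    {
        int32_t v = (int32_t)pix - (int32_t)offs * offs_scale;
        if (v < 0)
            v = 0;
        /* gain/128 spans about +/-100% of v; shifting further narrows
         * it to the +/-50%, +/-25% or +/-12.5% range */
        v += (v * gain) >> (7 + gain_shift);
        if (v > 1023)
            v = 1023;
        return (uint16_t)v;
    }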

The synchronization module gives the camera the ability to register short asynchronous events. The camera is designed to work with both Zoran (2/3, 1/2, and 1/3 in.) and Kodak (1/2 in.) imagers, which can work only in continuous "rolling shutter" mode. In that mode an asynchronous event (e.g. a laser pulse) will likely be registered across two consecutive frames (part in the first and the balance in the second), but since the camera continuously writes data into a circular buffer, it is possible to reconstruct the complete image. The synchronization module can work in two ways: from an external electrical signal, or by comparing the average pixel value of each scan line with a predefined threshold. The latter makes it possible to register short light pulses without any additional electrical connections to the camera.
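
The optical trigger amounts to a per-line mean-versus-threshold test. A C model of the idea (the function name and types are illustrative assumptions) could look like this:

    #include <stdint.h>

    /* Return nonzero when a scan line's mean brightness exceeds the
     * predefined threshold, i.e. when a light pulse was registered. */
    static int line_triggers(const uint16_t *line, int npix, uint32_t threshold)
    {
        uint32_t sum = 0;
        for (int i = 0; i < npix; i++)
            sum += line[i];
        return (sum / (uint32_t)npix) > threshold;
    }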

The JPEG compression itself is performed in a chain of stages, some of them using embedded Block RAMs as buffers and/or data tables (quantization and Huffman). This function uses approximately two-thirds of the resources of the FPGA . . .
  • First stage -- the Bayer-to-YCbCr converter receives 16x16-pixel MCU tiles and writes simultaneously into two buffers: one 16x16 buffer for Y data, and another 2x8x8 buffer for Cb and Cr data. In parallel, it calculates the average pixel value (the DC component) of each block and subtracts it at the output, so the DC component bypasses the DCT. Data leaves the buffers in 64-sample (9-bit signed) blocks, four Y blocks followed by one Cb and one Cr. The next three stages (DCT, quantizer/zigzag reorderer, and RLL encoder) are designed to process data in blocks of 64 consecutive samples with arbitrary (>=0) gaps between them.

  • Second stage -- the 8x8 DCT converter is based on a Xilinx reference design described here (PDF download). I had to modify it to work in asynchronous mode, so that each 64-sample block can start with an arbitrary delay (0 or more cycles) after the previous one, and to increase the dynamic range (the test fixture in the reference design used only 6-bit, not 8-bit, input data). This stage uses a 2x64x10-bit ping-pong memory buffer between the horizontal and vertical passes of the 2-D DCT. Output data comes out in down-first, then-right order for each 64-sample block.

  • Third stage -- the quantizer/zigzag reorderer receives the 8-bit signed average block value (directly from the Bayer-to-YCbCr stage) and combines it with the 12-bit signed output of the DCT. It uses two Block RAMs: one to store two alternative two-table sets of 64x12-bit quantization data, the other to reorder the output into zigzag order (from the lowest spatial frequencies to the highest) as required by the JPEG standard. This reordering increases the probability of long runs of zeroes, which are encoded in the following RLL stage. Instead of dividing by an 8-bit value, the quantizer multiplies by a 12-bit one (followed by >>12); the software that generates the tables corrects the divisor table written into the JPEG file header so that, for high divisor values, the divisors match the FPGA multiplicands (see the sketch after this list). Quantization tables are written by the CPU before compression starts.

  • Fourth stage -- the RLL encoder is the first to break the uniform 64-cycle data packets. It combines each data value output from the quantizer with the number of zero-valued samples preceding it. This stage also maintains the previous DC value for each component (Y, Cb, and Cr) and, for DC components, sends out the difference from the previous value instead of the value itself.

  • Fifth stage -- the Huffman encoder uses a 256x16-bit FIFO for the input data it receives from the RLL stage. Three more Block RAM modules (2x256x20) store the two sets of Huffman tables (one for Y, the other for the Cb and Cr components). In each 20-bit output word, the 4 MSBs specify the number of bits to send and the 16 LSBs hold the data bits themselves. The DC Huffman tables are rather short, so they are stored in unused parts of the AC tables.

  • Sixth stage -- the bit stuffer receives the number of bits to send and the data to send, combines them into a continuous bit stream, and formats it into 16-bit output words. It also inserts a 0x00 byte after each 0xFF, since 0xFF is the marker prefix in a JFIF data stream.
The output encoded data goes to a 256x16 FIFO and is then transferred to system memory as 32-bit words using a CPU DMA channel.
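
The division-free quantizer of the third stage is worth a sketch. Below is a C model of the multiply-and-shift trick described above; the rounding and the table-correction details are my assumptions, not the camera's actual Verilog:

    #include <stdint.h>

    /* Precompute m[i] ~= 4096 / q[i] for the 64 quantization steps.
     * Note q = 1 would need a 13-bit multiplicand; the corrected
     * divisor table is assumed to avoid that case. */
    static void make_recip_table(const uint8_t q[64], uint16_t m[64])
    {
        for (int i = 0; i < 64; i++)
            m[i] = (uint16_t)((4096 + q[i] / 2) / q[i]);
    }

    /* Approximate coef / q using the 12-bit reciprocal and >>12;
     * done on the magnitude so the shift is well defined in C. */
    static int16_t quantize(int32_t coef, uint16_t m_i)
    {
        int32_t mag = coef < 0 ? -coef : coef;
        int32_t r = (mag * m_i) >> 12;
        return (int16_t)(coef < 0 ? -r : r);
    }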

Results and plans

The code compiles into 98% of the FPGA's resources; compilation takes about twenty minutes on my 700 MHz Athlon PC. And it works -- and works really fast. The compressor runs at the full sensor rate (15 fps at 1280x1024), and I can get 12 fps in the QuickTime clips saved on the PC (some frames are still skipped). A couple of things need to be cleaned up to fix that frame skipping, after which the camera will deliver 15 fps at 1280x1024 pixels, 60 fps at 640x480 pixels, and 240 fps at 320x240 pixels over the LAN connection.



Model 313 Reconfigurable Network Camera


There is no video streaming server software in the camera yet; it can only provide QuickTime clips of a predefined length (although that length can be made very large). To allow the clips to be viewed live (before they are completely transferred), all of the index information is sent before the actual video data, so each JPEG frame is padded to make them all the same size. To keep the padding small (and make most frames fit), the JPEG compression quality is adjusted after each frame, as sketched below.
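
A minimal C sketch of that per-frame adjustment, with step sizes and bounds that are purely illustrative assumptions:

    /* After each frame, nudge the JPEG quality so that compressed
     * frames stay just under the fixed slot size reserved in the
     * QuickTime index. */
    static int adjust_quality(int quality, long frame_bytes, long slot_bytes)
    {
        if (frame_bytes > slot_bytes)
            quality -= 5;                 /* overflowed the slot: back off */
        else if (frame_bytes < slot_bytes * 3 / 4)
            quality += 1;                 /* lots of padding: creep back up */
        if (quality < 10) quality = 10;
        if (quality > 95) quality = 95;
        return quality;
    }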

Incidentally, on November 18, 2002, Xilinx announced the availability of two new members of the Spartan-IIE series, with 600K and 400K gates. The 600K version uses a bigger package, whereas the 400K-gate version can match the pinout of the XC2S300E currently used in the Model 313 camera, so it can be used without any schematic or PCB changes. With this device, I believe it will be possible to implement a full-frame MPEG encoder.

Here is a product description of the resulting camera . . .
About the Elphel Model 313 Reconfigurable Network Camera

There are many network cameras (cameras that can serve images and video without a computer) on the market today. Some can provide high-frame-rate video, but are limited to 705x480 pixels or less. There are even some high-resolution (megapixel) network cameras, but they usually need a second or longer to compress a full-size image.

The Model 313 can do both: it is a 1.3-megapixel network camera, and it can serve full-size images really fast -- at 15 frames per second. High resolution can be very useful for security applications: for example, a single camera with a wide-angle lens placed in a corner can see the whole room with the same quality as a narrow-angle NTSC camera on a pan/tilt platform, and it can see it all at the same time, without any need for scanning.

Full resolution at a high frame rate even makes it possible to avoid "digital pan-and-tilt" (sending out just a subwindow of the whole frame), the usual way of working around the slowness of high-resolution network cameras.

The Model 313 camera is powered with 48VDC over the LAN cable, compliant with the IEEE 802.3af standard. Since, for a given delivered power and acceptable loss, the usable cable length grows with the square of the supply voltage, 48V allows cables four times longer than 24V power and sixteen times longer than 12V. Such lower voltages (not IEEE 802.3af compliant) are still used in some powered-over-LAN cameras.

All of the camera's embedded software and the FPGA bitstream are stored in the camera's flash memory, which can be upgraded over the Internet. Unlike the rather dangerous procedure of rewriting a PC's BIOS flash memory (where a wrong file or a power failure during flashing will likely ruin the motherboard), the Model 313 camera relies on an important feature of the Axis ETRAX100LX 32-bit CPU: an internal bootloader that loads from the LAN and does not depend on the current flash memory contents, so it is always possible to start the camera software installation over again.

Another important feature for developers is that both the embedded software and FPGA hardware algorithms are open source. Four levels of customization of the camera are thereby possible . . .
  1. Modification of the user interface using web design tools -- The camera has three file systems, which makes it easy and safe to modify the preinstalled web pages and to restore everything if something goes wrong.

  2. Applications written in C -- C code can be compiled on a computer running Linux after installing software from the downloads page (and the links from there). The executable file may be transferred to the camera using ftp, onto a RAM disk or a flash memory file system (jffs). Such a user application may have a CGI interface and can respond to http requests from a web browser (a minimal example follows this list).

  3. Adding (or modifying) drivers in the camera's operating system -- This requires building a new OS kernel, and there are two ways to try it on the camera: boot the camera from the LAN with the new kernel (this changes nothing in the camera's flash memory, so simply turning the camera off and on again restores the original software), or flash it in place of the original one (in which case the camera will always boot the new system after a power cycle).

  4. FPGA modification, which gives full control over the power of reconfigurable computing in the camera -- This level requires different tools: the FPGA development software from Xilinx (available as a free download) and the camera sources posted on Elphel's website.
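
As an illustration of level 2, here is a minimal CGI program of the kind described in that item. It is a generic sketch (the page content and query handling are my own, not Elphel's), but it could be compiled for the camera's ETRAX CPU, copied to a RAM disk or jffs over ftp, and invoked by the camera's web server:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* the web server passes request parameters via the environment */
        const char *query = getenv("QUERY_STRING");

        /* a CGI response starts with headers and a blank line */
        printf("Content-Type: text/html\r\n\r\n");
        printf("<html><body><h1>Hello from the camera</h1>\n");
        printf("<p>Query: %s</p></body></html>\n", query ? query : "(none)");
        return 0;
    }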



About the author: Andrey N. Filippov has a passion for applying modern technologies to embedded devices, especially in advanced imaging applications. He has over twenty years of experience in R&D, including high-speed, high-resolution mixed-signal design, PLDs and FPGAs, and microprocessor-based embedded system hardware and software, with a special focus on image acquisition methods for laser physics studies and computer automation of scientific experiments. Andrey holds a PhD in Physics from the Moscow Institute for Physics and Technology. The photo of the author was made using a Model 303 High Speed Gated Intensified Camera.



NOTE: A version of this article translated into Russian is available here.


