Tuesday, February 15, 2011

HTC's Android Army Marches into Tablets, Pushing Cloud Services Hard

2011/02/16, by 沈勤譽

At Mobile World Congress (MWC), HTC unveiled six new Android products: its first tablet, the HTC Flyer, three smartphones, and two mid-range handsets with a dedicated Facebook button. All of them run the HTC Sense user interface, and the HTC Flyer is the first HTC device to ship with cloud services, including the HTC Watch mobile video streaming service and OnLive mobile gaming.
HTC held a large press conference at MWC on February 15 and announced six products at once, a company record; all are due to ship in the second quarter. HTC CEO and president Peter Chou (周永明) noted that global smartphone sales broke the 100 million unit mark in the fourth quarter of 2010 and surpassed PC shipments for the first time. Smartphones, he said, have gone from nice-to-have to must-have, and 2011 will be another exciting year for the category.
From phones to tablets, and from Android and Windows Phone 7 to Facebook, HTC showed greater ambition than ever at this year's MWC across models, operating systems, and even price points. Photo: 沈勤譽
He added that HTC shipped 25 million smartphones in 2010, more than double the year before and the best result in the company's history, while brand awareness climbed from 13% to 50% in 18 months, a nearly fourfold gain. HTC will not stop innovating, he said: the company is building new versions of HTC Sense and HTCSense.com to support superphones and devices with larger, higher-resolution screens.
As expected, HTC unveiled its first Android tablet, the HTC Flyer, which goes on sale worldwide in the second quarter; Vodafone has already agreed to carry it. Chou said HTC did not want a me-too product and had long been thinking about how to build a revolutionary tablet, spending a great deal of time understanding the hardware, experience, and content consumers need in order to deliver the best possible experience and rich content.
He stressed that when HTC watched how people use smartphones, computers, and other technology, it found that tablets deliver an unprecedented, more personal, and more productive experience. As more people grow used to carrying several wireless devices for different needs, HTC decided to push into this new territory and develop the opportunity.
The HTC Flyer is machined from a single piece of aluminum and features a 7-inch screen, a 1.5 GHz processor, and HSPA+ connectivity. HTC Sense provides a striking 3D home screen that presents the user's favorite or most important content in a distinctive carousel, and the tablet supports Adobe Flash 10 and HTML5 for web browsing.
Beyond touch input, the HTC Flyer adds the new HTC Scribe technology, which turns traditional note-taking into writing directly on the screen: users can scribble notes, sketch, sign contracts, or write on web pages and photos. A Timemark function records audio while notes are taken; tapping a word in the notes plays back the audio captured at that point. Notes are also integrated with the calendar, so the system asks whether to open a new note when a meeting reminder arrives, and whether to return to the previous session's notes before a recurring meeting. The tablet also builds in synchronization with Evernote, a leading note-taking application and service.
HTC is also pushing into cloud services: the HTC Flyer launches with the HTC Watch mobile video streaming service and OnLive mobile cloud gaming. HTC Watch lets users easily search the latest movies and videos and download hundreds of high-definition (HD) films on demand, and playback can resume instantly while watching online. With OnLive mobile cloud gaming, users can play online games on the HTC Flyer itself, or send them over a wireless broadband link to a TV, with nothing to download.
HTC also introduced three other Android handsets, the HTC Desire S, HTC Wildfire S, and HTC Incredible S, along with the HTC ChaCha and HTC Salsa, its first phones with a dedicated Facebook button. All run the Android 2.4 operating system and are expected to reach European and Asia-Pacific markets in the second quarter.
The HTC Desire S carries over the unibody aluminum design of the HTC Legend and pairs a 1 GHz Qualcomm Snapdragon MSM8255 processor with a 3.7-inch WVGA screen and front and rear cameras. The HTC Wildfire S aims at the mass market with a 3.2-inch HVGA screen and a 5-megapixel autofocus camera. The HTC Incredible S offers a 4-inch WVGA Super LCD screen with surround sound, an 8-megapixel dual-flash camera, and DLNA, so users can push video, photos, and music from the phone to a TV.
The HTC ChaCha and HTC Salsa, meanwhile, target the mid-range market and are built around Facebook: a single dedicated key opens many of Facebook's core functions, integrated with the HTC Sense experience.

HTC chief marketing officer John Wang (王景弘) said Facebook has 500 million users worldwide, 200 million of whom reach it from mobile devices. HTC has worked with Facebook for years, he said, and wants to give the broad consumer market an ideal social networking phone that connects and shares with friends anytime, anywhere: when placing a call, the phone automatically shows the recipient's latest status, and users can upload photos, share songs, and check in at favorite places.

Tuesday, February 8, 2011

PHOTONIC FRONTIERS: GESTURE RECOGNITION: Lasers bring gesture recognition to the home



Jan 25, 2011

Gesture-recognition technology is reaching the consumer market, thanks to new laser-based techniques that cut costs and improve system performance so it can be used to control video games and home televisions.

JEFF HECHT, contributing editor

New laser and optical systems are making gesture recognition a consumer technology. In late 2010, Microsoft (Redmond, WA) brought gesture recognition to video games when it introduced the Kinect motion controller for its Xbox gaming system. Gesture-recognition controls for televisions and set-top boxes using different laser technology debuted in January 2011 at the Consumer Electronics Show in Las Vegas.

Efforts to develop gesture-recognition techniques for human communication with computers go back to the 1990s, but applications have been slowed by the need for expensive optical and electronic equipment and sophisticated computer algorithms. Now improvements in sensors, optical systems, and computer technology have brought gesture recognition to the mass market. Priced around $150, Kinect was a hot seller during the holiday season. It also inspired enthusiasts to launch the OpenKinect project (http://openkinect.org) to develop open-source software for adapting Kinect. Meanwhile, other companies are introducing gesture recognition for control of home televisions and entertainment centers.

Basics and background
Gesture recognition is a complex task, requiring optics to record motion and pattern-recognition software to identify parts of the body, trace their motion, and separate the meaningful gestures from the background environment. Some development has focused on recognizing specific gestures such as sign language. Motion capture has become important in animating computer-generated characters in video games and movies. Typically many cameras track dozens of reflective markers worn by actors, then computers translate the data into motion of stick-figure skeletons and build three-dimensional characters around them. The results are impressive, but the process is very expensive.

Costs are coming down, however. At the MIT Computer Science and Artificial Intelligence Lab (Cambridge, MA), Robert Wang used a $60 webcam to record gestures made by people wearing thin fabric gloves, then translated the gestures for 3D manipulation of computer models.

For consumer gesture recognition, developers are turning to cameras that provide depth information. Stereo cameras viewing a scene from two different angles can reconstruct three-dimensional data, but they require bright light and high contrast between objects in the near and far field. Developers instead are focusing on time-of-flight cameras and structured light, which record 3D profiles of people and objects with light from low-power near-infrared laser diodes, says Sinclair Vass, senior director of marketing and operations at JDSU (Milpitas, CA). Filters block ambient light, so sensors pick up only the laser line, giving a cleaner signal. The two approaches differ considerably in detail.

Structured light
One approach, called “structured light,” illuminates a scene with a pattern such as an array of lines, and views the results at an angle (see Fig. 1). If the pattern is projected onto a flat wall, the camera sees straight lines, but if it illuminates a more complex scene, such as a person standing in front of the wall, it sees a more complex profile. Digital processing can analyze profiles across the field to map the topography of the person’s face and body.
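To make the triangulation behind structured light concrete, here is a minimal Python sketch of how depth can be recovered from the sideways shift of a projected feature. The focal length, baseline, and pixel pitch are illustrative assumptions, not figures from PrimeSense or any shipping system.

    def depth_from_disparity(disparity_px, focal_length_mm=3.0,
                             baseline_mm=75.0, pixel_pitch_mm=0.006):
        """Depth (mm) of a projected feature, recovered by triangulation.

        disparity_px is how far (in pixels) the feature appears shifted on
        the camera sensor relative to where it would land on a distant flat
        reference surface; larger shifts mean closer objects.
        """
        if disparity_px <= 0:
            return float("inf")  # no measurable shift: treat as background
        disparity_mm = disparity_px * pixel_pitch_mm
        return focal_length_mm * baseline_mm / disparity_mm

    # Example: a feature shifted by 15 pixels lands at roughly 2.5 m.
    print(round(depth_from_disparity(15) / 1000.0, 2), "m")

The same relationship explains why a fixed projector-camera baseline limits the usable depth range: very distant objects produce shifts too small to resolve, while very close ones shift out of the camera's view.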




FIGURE 1. Structured light systems project grids or other patterns, which reveal the contours of complex objects when viewed from the side. The lines look straight when projected onto a wall, but are distorted when projected onto people, furniture, or other uneven surfaces.



Traditionally, structured light projects rectangular grids or arrays of lines, but powerful lasers are needed to provide a high signal-to-noise ratio. To get good performance in an eye-safe system, PrimeSense (Tel Aviv, Israel) uses a proprietary technique it calls light coding in the optics it supplies for Microsoft’s Kinect system. “Our code is very rich in information, with almost zero repetition across the scene. It’s a code, not a grid, and this is what gives us reliability and replicability,” says Adi Berenson, vice president of business development and marketing at PrimeSense.

The illumination laser emits in the 800- to 900-nm range, invisible to the eye but in a range where silicon CMOS detectors have high quantum efficiency. A separate camera records color images. The optics record 640 × 480 pixel images, and depths of 0.8 to 3.5 m. Berenson says the resolution is about 16 times finer than competing time-of-flight systems and the hardware costs only a few tens of dollars. Microsoft software running on the Xbox interprets the raw gesture data and gets the game to respond, typically by having a character replicate the user’s movements on the screen (see Fig. 2). Users report typical response times of several seconds.


FIGURE 2. An infrared coded light pattern measures distance, as a separate camera records color images in the PrimeSense sensor system. The screen at top then replicates the motion of the children at bottom. (Courtesy of PrimeSense)



PrimeSense has made open-source drivers available through the Open Natural Interaction group (http://openni.org) and has big plans for the system. “Our future is natural interaction everywhere. TVs and set-top boxes are a straightforward next step,” says Berenson. Later he envisions applications in mobile devices, domestic robots, automobiles, and industry.

Time-of-flight cameras
A time-of-flight camera works somewhat like a laser radar, with an IR laser firing short pulses and the camera timing the return from pixels across its field of view. Several companies have developed commercial versions, including 3DV Systems (Tel Aviv, Israel) and Canesta (Sunnyvale, CA), both recently acquired by Microsoft; PMD Technologies (Siegen, Germany); and Optrima, the hardware division of Softkinetic-Optrima (Brussels, Belgium).

The system measures distance by comparing the phase of the modulated return pulses with those emitted by the laser (see Fig. 3). Separate sensors packaged together measure the time of flight from the IR laser pulses and record optical wavelength images for analysis (see Fig. 4). The architecture is very modular and does not require the calibration needed with structured light, says Michel Tombroff, CEO of Softkinetic-Optrima.
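As a rough illustration of that phase comparison, the sketch below converts a measured phase lag into distance; the 20 MHz modulation frequency is an assumption chosen for the example, not a specification of the Optrima hardware.

    import math

    C = 299_792_458.0  # speed of light, m/s

    def distance_from_phase(phase_shift_rad, mod_freq_hz=20e6):
        """Distance implied by the phase lag between the emitted and
        returned modulated light; the factor of 2 covers the round trip."""
        return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

    def max_unambiguous_range(mod_freq_hz=20e6):
        """Beyond this range the phase wraps past 2*pi and aliases."""
        return C / (2.0 * mod_freq_hz)

    # Example: a pi/2 phase lag at 20 MHz is about 1.87 m, well inside
    # the ~7.5 m unambiguous range at that frequency.
    print(round(distance_from_phase(math.pi / 2), 2), "m")
    print(round(max_unambiguous_range(), 2), "m")

The trade-off is visible in the formulas: raising the modulation frequency improves depth resolution but shrinks the unambiguous range before the phase wraps around.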


FIGURE 3. Time-of-flight system measures return time by observing the phase shift between returned and emitted pulses. (Courtesy of Softkinetic-Optrima)




“We can work with any 3D camera; as long as we get a clean, good-quality depth map, we’re happy,” says Tombroff. Raw images are filtered before software classifies or segments the scene. The software also identifies and removes nonhuman objects, such as plants, chairs, and tables, so it can focus on the main person or persons making gestures. From the positions of the people in the field of view and the depth information, the software calculates locations of arms, legs, shoulders, and hands, then applies that information through a series of frames to recognize gestures with hands and feet, as well as motion such as dancing. The system also reconstructs a stick-figure skeleton, which can be used to animate an avatar on the screen.
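The first step of that pipeline, pulling the person out of the background before any limbs are located, can be approximated with nothing more than a depth threshold. The toy Python sketch below does exactly that on a synthetic depth frame; the thresholds and the fake data are illustrative assumptions, and real middleware such as Softkinetic's uses far more sophisticated classification.

    import numpy as np

    def foreground_mask(depth_m, near=0.5, far=2.5):
        """Boolean mask of pixels whose depth falls inside the user zone."""
        return (depth_m > near) & (depth_m < far)

    def foreground_centroid(depth_m, near=0.5, far=2.5):
        """Rough (row, col) center of the foreground blob, a crude stand-in
        for locating the torso before a stick-figure skeleton is fitted."""
        mask = foreground_mask(depth_m, near, far)
        if not mask.any():
            return None
        rows, cols = np.nonzero(mask)
        return rows.mean(), cols.mean()

    # Synthetic 240 x 320 depth frame: a wall at 3 m with a person-sized
    # patch at 1.5 m in front of it.
    frame = np.full((240, 320), 3.0)
    frame[60:200, 120:200] = 1.5
    print(foreground_centroid(frame))  # roughly (129.5, 159.5)

Tracking such a centroid from frame to frame already gives crude gesture data; fitting limbs and a full skeleton is the much harder part that the commercial software handles.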


FIGURE 4. The Optrima camera includes RGB and infrared imaging and acoustic pick-up. (Courtesy of Softkinetic-Optrima)




The big news from Softkinetic-Optrima is a gesture-recognition system announced last month at the Consumer Electronics Show. It’s designed as a gesture system to work with an Intel-based set-top box and controls displayed on a television screen. “We can do complete navigation, go to menus, click on things, increase volume, close a movie, and navigate through screens,” says Tombroff. It even includes an “air keyboard” so users can enter detailed instructions. “The principle is easy. The trick is to make it smooth, intuitive, and robust,” he said, although users will need a few minutes to accustom themselves to the controls.

Softkinetic-Optrima’s time-of-flight camera also can be used in other applications. One is to provide feedback in computer games developed by the Dutch company Silverfit (Alphen aan den Rijn, the Netherlands), which help elderly people move their limbs to restore the function of damaged muscles; gestures provide feedback. Other possibilities include military simulation systems and a golf-training system that uses a time-of-flight camera to monitor how players swing their shoulders.

Outlook
“Both coded light and time-of-flight are viable, and most likely will coexist,” says Vass. Gaming is “a good strong commercial market, but pretty small relative to other places where this could be used, such as the living room.”

Kinect made a huge splash as the first gesture-recognition system in the consumer market. Video game consoles are a good starting point: their powerful processors supply the computing power needed to attack the tough problems of gesture recognition in games. Microsoft also tailored dancing and movement games that play to Kinect’s strengths; promotional videos show preteens having a blast. However, gamers say that present response times of several seconds are much too slow for many popular first-person shooter games, which demand split-second reactions.

Television controls are a potentially huge market and a logical next step for gesture recognition. Few channel surfers demand split-second response, and viewers have grown accustomed to on-screen menus. A well-designed gesture-response system could be fun and easy to use, and would spare viewers the nuisance of searching for mislaid remotes. It likely will start as an option on high-end sets, or as an accessory for gadget hounds and people who lose remotes.

“The market is really launching now,” says Tombroff. Vass expects gesture controls to go much farther, eventually interfacing with handheld devices, laptops, and miniprojectors to control video conferences.

iPhone 5 Is Coming, and Concept Stocks Are Taking Off


2011-02-09 01:18 | Commercial Times (工商時報) | Reporter 謝艾莉, Taipei

Apple's annual developers conference runs June 5 to 9, where the next-generation smartphone, the iPhone 5, is expected to debut. The related concept stocks with the most to gain include Hon Hai (鴻海), Largan Precision (大立光), Genius Electronic Optical (玉晶光), TPK (宸鴻), Wintek (勝華), Chimei Innolux (奇美電), Yageo (國巨), 美磊, Foxlink (正崴), TXC (晶技), Catcher (可成), Foxconn Technology (鴻準), Flexium (台郡), Career Technology (嘉聯益), Tripod (健鼎), Unimicron (欣興), Compeq (華通), and Simplo (新普).
The CDMA version of the iPhone 4 is expected to launch in the first quarter of this year, and analysts forecast quarterly iPhone 4 sales could exceed 20 million units, topping the 16 to 17 million shipped in the fourth quarter of last year.
For the full year, analysts estimate iPhone shipments could challenge 100 million units, roughly double last year's 48 million, with the CDMA model accounting for about 20 to 25% of sales.
The iPhone 5 is rumored to have a 3.7-inch screen, slightly larger than the iPhone 4's, with the Home button removed and the touch panel enlarged to 3.7 inches. Because the iPhone 3G and iPhone 3GS looked almost identical, observers expect most of the iPhone 5 design to carry over from the iPhone 4.
The iPhone 5 is expected to use a 1.5 GHz Apple A4 processor, an 8-megapixel camera with support for 1080p high-definition video, and a higher-quality OLED panel.
Among touch panel makers, module suppliers Wintek, TPK, and Chimei are handling production of the mid-size iPad 2 in the first quarter, and monthly shipments are expected to ramp sharply in the second quarter.
To improve the handset's optical performance, Apple requires the iPhone 4 touch panel module to be laminated to the display. Wintek and TPK handle this lamination step, after which the panels are passed to Hon Hai for final assembly.
In addition, with smartphone shipments set to climb this year and no major capacity expansion among flexible PCB makers worldwide, analysts say Taiwanese suppliers' global market share will rise as their competitiveness keeps improving, benefiting flex board makers such as Career Technology. (Related coverage on page A2)

Monday, February 7, 2011

Intel Chip Flaw Brings Unexpected Rush Orders for IC Design Houses

Intel Sandy Bridge concept stocks

Intel's 6-series Cougar Point chipset, the companion to its Sandy Bridge processors, was found to be defective just before the Lunar New Year. With Intel's corrected chipsets due to start shipping at the end of February, motherboard and ODM/OEM makers scrambling to ship have unexpectedly expanded rush orders for PC peripheral chips from IC design houses, benefiting ITE Tech (聯陽), Faraday (智原), Realtek (瑞昱), and Ralink (雷凌).
Intel announced Sandy Bridge availability in early January, but less than a month after launch the 6-series Cougar Point flaw emerged, affecting the desktop H67 and P67 chipsets and the notebook HM67 and HM65 chipsets.
Because Sandy Bridge had only just gone on sale, only some of the shipped H67/P67 motherboards need to be recalled, and since notebooks on the Sandy Bridge Huron River platform were not due to launch until February 20, the hit to notebook makers' revenue remains limited even though motherboard makers' revenue will suffer.
Intel says chipsets with the SATA interface problem corrected will begin shipping at the end of February, and supply across the production chain should return to normal in April. Industry players note that Intel managed to intercept all the defective parts before the notebook launch date and has told motherboard and ODM/OEM makers it will cover compensation in full, effectively minimizing the damage from the chipset flaw.
Although motherboard and ODM/OEM makers' February and March revenues will still take varying hits, they have no plans to recall older boards and strip them for components. To get the new chipset and ship ahead of rivals, they are instead placing rush orders with PC peripheral chip makers, handing Taiwan's IC design houses an unexpected windfall.
Industry players say the Intel chipset flaw will push demand later into the year, so the PC market should escape the traditional May-June slow season. To be first out the door once the corrected chipsets arrive, motherboard and ODM/OEM makers are asking foundries to reserve ample capacity and other PC peripheral chip suppliers to guarantee supply.
Analysts expect the beneficiaries of these rush orders from motherboard and ODM/OEM makers to include I/O chip supplier ITE Tech, USB 3.0 chipset supplier Faraday, card reader and hub IC supplier Genesys Logic (創惟), and wireless networking chip suppliers Realtek and Ralink. (Related coverage on page A2)