Stay social with the Acrosser AMB-D255T3 Mini-ITX Board!

To further promote Acrosser products, we will continue to enrich our web content and translate our website into more languages for our global audience. This month, Acrosser has created a short film that highlights its Mini-ITX board, the AMB-D255T3, using close-ups to capture its best features from different angles.

One fascinating feature of the AMB-D255T3 is its large heatsink, which conducts heat away from the board more effectively. In addition, its many intersecting aluminum fins increase the heat-radiating surface area and thus heat-dissipation efficiency. The fanless design also eliminates the risk of fan malfunction, raising the product's life expectancy. Without a fan, the AMB-D255T3 single board computer performs steadily, coolly, and quietly.

Built around the Intel Atom D2550, the AMB-D255T3 provides abundant peripheral interfaces to meet the needs of different customers. For those looking for expansion, the board provides one Mini PCIe socket for a wireless or storage module. For video, it supports dual displays via VGA, HDMI, or 18-bit LVDS, serving as many industries as possible.

In conclusion, Acrosser's AMB-D255T3 is a perfect combination of low power consumption and great computing performance. Its complete set of I/O functions allows system integrators to apply the AMB-D255T3 to all sorts of solutions, making their embedded ideas a reality.

Product Information:
http://www.acrosser.com/Products/Single-Board-Computer/Mini-ITX-&-others/AMB-D255T3%E3%80%80(Mini-ITX-)/Intel-Atom-D2550-AMB-D255T3-(Mini-ITX)-.html

Follow us on Twitter!
http://twitter.com/ACROSSERmarcom

Contact us:
http://www.acrosser.com/inquiry.html

Embedded Virtualization: Latest Trends and Techniques

Data center architectures have been increasingly influencing all areas of embedded systems. Virtualization techniques are commonplace in enterprises and data centers, where they increase capacity and reduce floor space and power consumption. From networking to smartphones, industrial control to point-of-sale systems, the embedded market is also accelerating the adoption of virtualization for some of the same reasons, as well as others unique to embedded systems.

Virtualization is the creation of software abstraction on top of a hardware platform and/or Operating System (OS) that presents one or more independent virtualized OS environments.

Enterprise and data center environments have been using virtualization for years to maximize server platform performance and run a mix of OS-specific applications on a single machine. They typically take one server blade or system and run multiple instances of a guest OS and web/application server, then load balance requests among these virtual server/app environments. This enables a single hardware platform to increase capacity, lower power consumption, and reduce physical footprint for web- and cloud-based services.

Within the enterprise, virtualized environments may also be used to run applications that only run on a specific OS. In these cases virtualization allows a host OS to run a guest OS that in turn runs the desired application. For example, a Windows machine may run a VMWare virtual machine that runs Linux as the guest OS in order to run an application only available on Linux.

How is embedded virtualization different?

Unlike data center and enterprise IT networks, embedded systems span a very large number of processors, OSs, and purpose-built software. So introducing virtualization to the greater Embedded Systems community isn’t just a matter of supporting Windows and Linux on Intel architecture. The primary drivers for virtualization are different as well. Embedded systems typically consist of a real-time component where it is critical to perform specific tasks within a guaranteed time period and a non-real-time component that may include processing real-time information, managing or configuring the system, and use of a Graphical User Interface (GUI).

Without virtualization, the non-real-time components can compromise the real-time nature of the system, so often these non-real-time components must run on a different processor. With virtualization these components can be combined on a single platform while still ensuring the real-time integrity of the system.

Technologies enabling embedded virtualization

There are some key capabilities required for embedded virtualization – multicore processors, and hypervisors for the relevant OSs and processor architectures. In the enterprise/data center world, Intel architecture has incorporated multicore technology for years now. Having multiple truly independent cores and symmetric multiprocessing laid the groundwork for the widespread use of virtualization. In the embedded space, there are even more processor architectures to consider, like ARM and its many variants, MIPS, and Freescale PowerPC/QorIQ architectures. Many of these processor technologies have only recently started incorporating multicore. Further, hypervisors must be made available for these processor architectures, and they must be able to host the variety of real-time and general-purpose OSs found in the embedded world. Many Real-Time Operating System (RTOS) vendors are introducing hypervisors that support Windows and Linux along with their RTOS, which provides an embedded baseline that enables virtualization.

Where are we in the adoption?

As multicore processors continue to penetrate embedded systems, the use of virtualization is increasing. More complex embedded environments that mix real-time processing with user interfaces, networking, and graphics are the most likely applications. Another feature of embedded environments is the need to communicate between VM environments – the real-time component must often provide the data it collects to the non-real-time VM environment for reporting and management. These communication channels are often not needed in the enterprise/data center world, since each VM communicates independently.

LynuxWorks embedded virtualization perspective

Robert Day, Vice President of Sales and Marketing at LynuxWorks (www.lynuxworks.com), echoed much of this history and the current state of embedded systems and virtualization. “Enterprise environments are nowhere near as diverse as embedded systems environments. In addition, embedded environments are constrained – the virtualization layer must deal with specific amounts of memory and accommodate a variety of CPUs and SoC variants.”

Day notes that embedded processors are now coming out with capabilities to better support embedded virtualization. Near-native performance is perhaps more important in embedded applications than in enterprise applications, so the ability of these hypervisors to provide a thin virtualization and configuration layer and then “get out of the way” is an important feature that meets the performance requirements the industry needs.

Day notes that hypervisors that run on or depend on another OS simply don't work in most embedded environments, due to the loss of near-native performance as well as the potential compromise of real-time characteristics. Type 1 hypervisors – a software layer running directly on the hardware and providing the resource abstraction to one or more OSs – can work, but tend to have a large memory footprint, since they often rely on a “helper” OS inside the hypervisor. For this reason, LynuxWorks coined the term “Type 0 hypervisor” – a hypervisor with no OS inside. It is a small piece of software that manages memory, devices, and processor core allocation. The hypervisor contains no drivers – it just tunnels through to the guest OSs. The disadvantage is that it doesn't provide all the capabilities that might be available in a fuller-featured hypervisor.

Embedded system developers typically know the platform their systems run on, what OSs are used, and what the application characteristics are. In these cases, it’s acceptable to use a relatively static configuration that gains higher performance at the expense of less flexibility – certainly an acceptable trade-off for embedded systems.

LynuxWorks has been seeing embedded developers take advantage of virtualization to combine traditionally separate physical systems into one virtualized system. One example Day cited was combining a real-time sensor environment that samples data with the GUI management and reporting system.

Processors that incorporate Memory Management Units (MMUs) support the virtualized memory maps well for embedded applications. A more challenging area is the sharing or allocating of I/O devices among or between virtualized environments. “You can build devices on top of the hypervisor, then use these devices to communicate with the guest OSs,” Day says. “This would mean another virtual system virtualizing the device itself.” Here is where an I/O MMU can provide significant help. The IOMMU functions like an MMU for the I/O devices. Essentially the hypervisor partitions devices to go with specific VM environments and the IOMMU is configured to perform these tasks. Cleanly partitioning the IOMMU allows the hypervisor to get out of the way once the device is configured and the VM environment using that device can see near-native performance of the I/O.
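The device partitioning described above can be pictured as a static table the hypervisor programs into the IOMMU. The following is a toy sketch (all device and VM names are invented for illustration), not an actual hypervisor interface:

```python
# Static device-to-VM partition table, as a Type 0 hypervisor might
# configure an IOMMU: each I/O device is assigned to exactly one VM.
PARTITION = {
    "eth0": "rt_vm",    # network interface owned by the real-time VM
    "gpu0": "gui_vm",   # graphics device owned by the GUI VM
    "uart1": "rt_vm",   # serial port owned by the real-time VM
}

def dma_allowed(device: str, vm: str) -> bool:
    """IOMMU-style check: a device may only access memory of its assigned VM."""
    return PARTITION.get(device) == vm
```

Once such a table is in place, the hypervisor no longer mediates each transfer; the IOMMU enforces the partition in hardware, which is what allows near-native I/O performance.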

LynuxWorks has seen initial virtualization use cases in defense applications. The Internet of Things (IoT) revolution is also fueling the embedded virtualization fire.

Virtualization is one of the hottest topics today and its link to malware detection and prevention is another important aspect. Day mentioned that malware detection is built into the LynuxWorks hypervisor. This involves the hypervisor being able to detect behavior of certain types of malware as the guest OSs run. Because of the privileged nature of the hypervisor, it can look for certain telltale activities of malware going on with the guest OS and flag these. Most virtualized systems have some method to report suspicious things from the hypervisor to a management entity. When the reports are sent, the management entity can take action based on what the hypervisor is reporting. As virus and malware attacks become more purpose-built to attack safety-critical embedded applications, these kinds of watchdog capabilities can be an important line of defense.

Wind River embedded virtualization perspective

Technology experts Glenn Seiler, Vice President of Software Defined Networking, and Davide Ricci, Open Source Product Line Manager, at Wind River (www.windriver.com) say virtualization is important in the networking world.

A network transformation is underway: The explosion of smart portable devices coupled with their bandwidth-hungry multimedia applications has brought us to a crossroads in the networking world. Like the general embedded world, network infrastructure is taking a page from enterprise and data center distributed architectures to transform the network from a collection of fixed-function infrastructure components to general compute and packet processing platforms that can host and run a variety of network functions. This transformation is called Software Defined Networking (SDN). Coupled with this initiative is Network Functions Virtualization (NFV) – taking networking functionality like bridging, routing, network monitoring, and deep packet inspection and creating software components that can run within a virtualized environment on a piece of SDN infrastructure. This model closely parallels how data centers work today, and it promises to lower operational expense, increase flexibility, and shorten the deployment of new services.

Seiler mentions that there has been considerable pull from service providers to create NFV-enabled offerings from traditional telecom equipment manufacturers. “Carriers are pushing toward NFV. Wind River has been developing its technical product requirements and virtualization strategy around ETSI NFV specifications. This has been creating a lot of strong demand for virtualization technologies, and Wind River has focused a lot of resources on providing carrier-grade virtualization and cloud capabilities around NFV.”

Seiler outlines four important tenets that are needed to support carrier-grade virtualization and NFV:

Reliability and availability. Network infrastructure is moving toward enterprise and data center architectures, but must do so while maintaining carrier-grade reliability and availability.
Performance. Increasing bandwidth and real-time requirements such as baseband and multimedia streaming require near-native performance with NFV.
Security. Intelligent virtualized infrastructure must maintain security and be resistant to malware or viruses that might target network infrastructure.
Manageability. Virtualized, distributed network components must be manageable transparently with existing OSS/BSS, support reconfiguration, and remain resilient with no single point of failure.

Wind River recently announced Wind River Open Virtualization, a virtualization environment based on the Kernel-based Virtual Machine (KVM) that delivers the performance and management capabilities required by communications service providers. Service provider expectations for NFV are ambitious – among them, the ability to virtualize base stations and radio access network controllers – and to support these kinds of baseband protocols at peak capacity, the system must have significant real-time properties.

Specifically, Wind River looked at interrupt and timer latencies for applications running natively versus running on a hypervisor managing the VMs. Ricci mentioned that Wind River engineers spent a significant amount of time working with the KVM open source baseline to provide real-time preemption components capable of near-native performance. Maintaining carrier-grade speeds is especially important for the telecom industry, as performance cannot be compromised.

Refer to: http://embedded-computing.com/articles/embedded-virtualization-latest-trends-techniques/

Acrosser Introduces the Book-Sized Fanless Mini PC Video

To illustrate the high performance of the AES-HM76Z1FL, Acrosser created a short film explaining the multiple features of our ultra-thin embedded system. At first glance, this book-sized mini PC embodies great computing performance within its small form factor.

The arrangement of the I/O ports takes both product design and industrial applicability into consideration. Despite the AES-HM76Z1FL's small form factor, a wide selection of I/O ports, including HDMI, USB, LAN, COMBO, GPIO, and COM, can be found on both sides of the product. Moreover, the model can be mounted horizontally or vertically, making it a flexible option that caters to many different industries. We are confident these qualities make the AES-HM76Z1FL a more feasible choice than other embedded systems.

The second part of the video demonstrates the four major applications of the AES-HM76Z1FL mentioned in our previous announcement: digital signage, kiosks, industrial automation, and home automation. Aside from these four, Acrosser believes there are still many other applications for which the AES-HM76Z1FL would be useful.

Through the video, Acrosser was able to demonstrate the best features of the AES-HM76Z1FL, and allow our customers to easily see its power and versatility.

Finally, we would like to offer our gratitude to the many applicants to the Free Product Testing Event. The program is easy to apply to and is still going on right now! With the event at its halfway mark, many system integrators and industrial consultants have already provided plenty of interesting ideas. For those who have not yet applied, Acrosser welcomes you to submit your amazing proposals!

Product Information:
http://www.acrosser.com/Products/Embedded-Computer/Fanless-Embedded-Systems/AES-HM76Z1FL/Intel-Core-i3/i7-AES-HM76Z1FL.html

Contact us:
http://www.acrosser.com/inquiry.html

Enhanced Cybersecurity Services: Protecting Critical Infrastructure

Comprehensive cybersecurity is an unfortunate necessity in the connected age, as malware like Duqu, Flame, and Stuxnet has proven to be an effective instrument of espionage and physical sabotage rather than a vehicle of petty cybercrime. In an effort to mitigate the impact of such threats on United States Critical Infrastructure (CI), the Department of Homeland Security (DHS) developed the Enhanced Cybersecurity Services (ECS) program, a voluntary framework designed to augment the existing cyber defenses of CI entities. The following provides an overview of the ECS program architecture, technology, and entry qualifications as described in an “on background” interview with DHS officials.

At some point in 2007, an operator at the Natanz uranium enrichment facility in Iran inserted a USB memory device infected with the Stuxnet malware into an Industrial Control System (ICS) running a Windows Operating System. Over the next three years, the malware propagated across the Natanz facility's internal network by exploiting zero-day vulnerabilities in a variety of Windows OSs, eventually gaining access to the Programmable Logic Controllers on a number of ICSs for the facility's gas centrifuges. Stuxnet then injected malicious code to make the centrifuges spin at their maximum degradation point of 1410 Hz. One thousand of the 9,000 centrifuges at the Natanz facility were damaged beyond repair.

In February 2013, Executive Order (EO) 13,636 and Presidential Policy Directive (PPD)-21 ordered the DHS to develop a public-private partnership model to protect United States CI entities from cyber threats like Stuxnet. The result was an expansion of the Enhanced Cybersecurity Services (ECS) program from the Defense Industrial Base (DIB) to 16 critical infrastructure sectors.

Enhanced Cybersecurity Services framework

At its core, the ECS program is a voluntary information-sharing framework that facilitates the dissemination of government-furnished cyber threat information to CI entities in both the public and private sectors. Through the program, sensitive and classified threat information is collected by agencies across the United States Government (USG) or by EINSTEIN sensors placed on Federal Civilian Executive Branch (FCEB) agency networks, and then analyzed by DHS to develop “threat indicators”. DHS-developed threat indicators are then provided to Commercial Service Providers (CSPs) that, after being vetted and entering a Memorandum of Agreement (MOA) with DHS, may commercially offer approved ECS services to entities that have been validated as part of United States CI. The ECS services can then be used to supplement existing cyber defenses operated by or available to CI entities and CSPs to prevent unauthorized access, exploitation, and data exfiltration.

In addition, CSPs may provide limited, anonymized, and aggregated cybersecurity metrics to the DHS Office of Cybersecurity & Communications (CS&C) with the permission of the participating CI entity. Called Optional Statistical Information Sharing, this practice aids in understanding the effectiveness of the ECS program and its threat indicators, and promotes coordinated protection, prevention, and responses to malicious cyber threats across federal and commercial domains.

Enhanced Cybersecurity Services countermeasures

The initial implementation of ECS includes two countermeasures for combating cyber threats: Domain Name Service (DNS) sinkholing and e-mail filtering.

DNS sinkholing is particularly effective against malware like Stuxnet that is equipped with distributed command and control capabilities, which allow a threat to open a connection back to a command and control server so that its creators can remotely access it, give it commands, and update it. The DNS sinkholing capability enables CSPs to prevent communication with known or suspected malicious Internet domains by redirecting network connections away from those domains. Instead, CSPs direct network traffic to “safe servers” or “sinkhole servers,” both hindering the spread of the malware and preventing its communication with cyber attackers.
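The sinkholing decision itself is simple to sketch. The following minimal Python illustration (the domain names, sinkhole address, and upstream table are all invented for the example; a real deployment runs inside the CSP's DNS resolver) shows the core idea: known-bad domains resolve to the sinkhole, everything else resolves normally:

```python
# Hypothetical sinkhole address ("safe server") and threat-indicator list.
SINKHOLE_IP = "10.0.0.250"
BLOCKLIST = {"c2.badactor.example", "update.malware.example"}

def resolve(domain, upstream):
    """Return the sinkhole address for known-bad domains; otherwise
    fall through to normal (upstream) resolution."""
    if domain.lower().rstrip(".") in BLOCKLIST:
        return SINKHOLE_IP  # redirect command-and-control traffic
    return upstream[domain]  # normal resolution
```

Because the malware believes it reached its command server, the sinkhole can also log connection attempts, which is how infected hosts inside a network are identified.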

The e-mail filtering capability is effective in combating cyber threats like Duqu, for example, which spread to targets through contaminated Microsoft Word e-mail attachments (also known as phishing), then used a command and control network to exfiltrate data encrypted in image files back to its creators. The e-mail filtering capability enables CSPs to scan attachments, URLs, and other potential malware hidden in e-mail destined for an entity’s networks and potentially quarantine it before delivery to end users.
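A rough sketch of that filtering decision, assuming indicator lists of suspicious attachment extensions and URL substrings (both invented here; real ECS indicators are government-furnished and far richer), might look like:

```python
# Hypothetical threat indicators for the example.
SUSPICIOUS_EXTENSIONS = {".exe", ".scr", ".js"}
URL_INDICATORS = {"badactor.example"}

def filter_message(attachment_names, urls):
    """Return 'quarantine' if any attachment or URL matches an indicator,
    otherwise 'deliver' the message to the end user."""
    for name in attachment_names:
        if any(name.lower().endswith(ext) for ext in SUSPICIOUS_EXTENSIONS):
            return "quarantine"
    for url in urls:
        if any(indicator in url for indicator in URL_INDICATORS):
            return "quarantine"
    return "deliver"
```

The key property is that the scan happens before delivery, so a phishing payload like Duqu's contaminated Word attachment never reaches the user's inbox.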

Accreditation and costs for Enhanced Cybersecurity Services

The CS&C is the DHS executive agent for the ECS program, and executes the CSP security accreditation process and MOAs, as well as validation of CI entities. Any CI entity from one of the 16 key infrastructure sectors can be evaluated for protection under the ECS program, including state, local, tribal, and territorial governments.

For CSPs to complete the security accreditation process, they must sign an MOA with the USG that defines ECS expectations and specific program activities. The MOA works to clarify the CSP’s ability to deliver ECS services commercially while adhering to the program’s security requirements, which include the ability to:

Accept, handle, and safeguard all unclassified and classified indicators from DHS in a Sensitive Compartmented Information Facility (SCIF)
Retain employee(s) capable of holding security clearances for the purposes of handling classified information (clearance sponsorship is provided by DHS)
Implement ECS services in accordance with the security guidelines outlined in the network design provided upon signing of the MOA

Privacy, confidentiality, and Enhanced Cybersecurity Services

“ECS does not involve government monitoring of private communications or the sharing of communications content with the government by the CSPs,” a DHS official told Industrial Embedded Systems. Although CSPs may voluntarily share limited aggregated and anonymized statistical information with the government under the ECS program, ECS-related information is not directly shared between customers of the CSPs and the government.

“CS&C may share information received under the ECS program with other USG entities with cybersecurity responsibilities, so long as the practice of sharing information is consistent with its existing policies and procedures. DHS does not control what actions are taken to secure private networks or diminish the voluntary nature of this effort. Nor does DHS monitor actions between the CSPs and the CI entities to which they provide services. CI entities remain in full control of their data and the decisions about how to best secure it.”

Refer to: http://industrial-embedded.com/articles/enhanced-protecting-critical-infrastructure/

Machine-to-Machine (M2M) Gateway: Trusted and Connected Intelligence

The factory of the future will still have Programmable Logic Controllers (PLCs) and Human-Machine Interface (HMI) panels, but someone half a world away will likely be monitoring and controlling them. That person may be sitting at a desk watching over a global network of facilities or checking the latest production statistics from a smartphone. Either way, the vision of the “Connected Factory” is evolving from concept to reality, as the explosive growth in Machine-to-Machine (M2M) connections, mobile devices in the enterprise, and wireless data traffic shows.

Implementing this approach, however, is not simply a matter of connecting devices to Ethernet and wireless networks. The fundamentals must be right to ensure that facilities produce information that can be accessed, monitored, and controlled from anywhere.

Over the past 50 years, automation technology has evolved to the point that a plant manager for a global industrial manufacturing company can easily monitor and control devices from hundreds of miles away, rather than standing a few feet away from them. This level of control can be achieved in ways that may include:

Sitting at a desk in a centralized office
Watching video footage captured by a global network of connected cameras
Remotely troubleshooting a piece of equipment from a tablet
Checking the latest production statistics using a smartphone app

The progression of the “Industry 4.0” revolution means that more factories and industrial plants will implement more networked devices that are able to collect data. This concept, which is also referred to as the “connected factory,” is transitioning from a “what-if” notion to present-day reality at overwhelming speed.

The flood of enabling technology has paved the way for automation to gain global prominence across a wide variety of industrial and manufacturing industries. Organizations are increasingly realizing that with automation they can produce better quality products, sustainably and efficiently, while keeping a closer check on production costs. Gartner forecasts that by the year 2020, there will be up to 30 billion devices connected with unique IP addresses, most of which will be products. In the industrial world, these devices will be equipment such as natural gas or wastewater treatment pumps, high-capacity scales, and other production machines.

While many global manufacturers are eager to realize the benefits of the Connected Factory, such as reduced operational costs and better visibility and control of assets, it is unrealistic and cost prohibitive for them to construct greenfield facilities or orchestrate a ’rip-and-replace’ of all legacy equipment. Instead, plant managers are better off leveraging industrially fluent communications devices and adapting the legacy sensors, Remote Terminal Units (RTUs), and communications protocols that have served them well for years in order to create modern, real-time reporting and control systems.

The three key requisites of the Connected Factory

Managing productivity and profitability is a key role of plant managers and engineers in world-class manufacturing operations. The first step towards achieving this in the 21st century factory is to implement the fundamentals of a successful Connected Factory. These fundamentals must be in place to ensure that factories are generating information that can be accessed, monitored, and controlled from anywhere.

To begin this process, manufacturers must do three things:

Enable devices to speak the same language
Rethink operational efficiencies so more devices can talk with each other
Provide a secure, seamless platform in which these devices can communicate

Come together: Devices that speak the same language

The challenge with integrating legacy equipment with the Connected Factory model is that it often uses older protocols or even serial links that don’t easily fit into the TCP/IP world. An organization’s engineers must first ensure that this equipment can speak the same language as newer devices.

Plant engineers often source the network switches used to build industrial networks from the IT world, a decision that may make sense for higher-level infrastructure, but one that essentially introduces technology that is not purpose-built for machine-level control systems. For example, a modern machine may have every component networked and may allow every conceivable piece of status information to be displayed on its HMIs, but the network switch itself – the failure of which could take down the entire machine – sits alone or is loosely integrated via expensive and seemingly incomprehensible SNMP drivers.

To avoid this scenario, manufacturers must use a complex combination of drivers to provide protocol compatibility, replace existing hardware with more complex devices, or choose advanced HMIs, protocol converters, and industrial-grade switches that offer industrial fluency and multi-protocol support.

The first two options add complexity and development costs to the system. The third – deploying equipment with native support for all required standards and protocols – provides a simpler solution.

Raise your voice: Enabling more devices to communicate

Connecting equipment that can't easily be reached in remote or geographically rugged locations enables real-time information access and greatly enhances remote troubleshooting capabilities. It can also result in safer working conditions for the humans who must monitor, regulate, and troubleshoot this equipment. Think about the value of automated devices in an oil and gas facility, for example. This clear value proposition for remote connectivity is driving the current boom in cellular M2M connections. Consider Metcalfe's law as it applies to the Connected Factory: the value of the network grows roughly with the square of the number of connected assets.
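Metcalfe's law follows from simple counting: among n connected assets there are n(n-1)/2 possible point-to-point links, so each new asset adds more potential connections than the last. A one-line illustration:

```python
def potential_links(n):
    """Number of distinct point-to-point connections among n assets:
    each of n assets can pair with (n - 1) others; divide by 2 to
    avoid counting each pair twice."""
    return n * (n - 1) // 2
```

Ten connected assets yield 45 potential links; a hundred yield 4,950 – which is why each newly connected machine is worth more than the one before it.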

With this in mind, manufacturers must invest in issuing all remote assets a cellular connection. Cellular routers and modems now provide native support for industrial automation equipment and protocols, including models that support 4G network connectivity. These products enable two-way communications from facility to facility, and enable information exchange with remote assets, such as offshore platforms or unattended substations or pipelines.

Everyone’s invited: A better place for devices to connect

As manufacturers seek to assign an IP address to networked assets, one hurdle they often face is that the available bandwidth remains static in spite of the growing number of networkable devices and data points. When factoring in the hierarchical nature of the industrial world – with PLCs and HMIs grouped into machines, these machines grouped into cells, and these cells grouped into factories – assigning an IP address to every PLC and sensor can be a management nightmare.

But new approaches to network design and configuration can help plant managers take full advantage of the available connectivity and control. Instead of assigning individual IP addresses, for example, engineers can solve the problem by using a rugged appliance that manages communications with dozens of disparate devices (including sensors, PLCs, and HMIs) while serving as a single point of contact for the network.
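A minimal sketch of that single-point-of-contact pattern (the device names and class are invented for illustration; a real gateway would speak Modbus, OPC UA, or similar protocols downstream) might look like:

```python
# A gateway appliance polls many field devices but presents one
# address and one snapshot to the plant network.
class M2MGateway:
    def __init__(self):
        self.devices = {}  # device name -> latest reading

    def poll(self, name, value):
        """Record a reading from a downstream sensor, PLC, or HMI."""
        self.devices[name] = value

    def report(self):
        """Single point of contact: one snapshot for the whole cell."""
        return dict(self.devices)
```

The supervisory network then addresses only the gateway, so dozens of sensors and PLCs need no routable IP addresses of their own.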

What’s next for Industry 4.0?

The ability to seamlessly communicate with operators, control systems, and software applications combined with practical networking options and support for native features and protocols delivers exponential meaning to data extracted from industrial devices. In other words, the true value of Industry 4.0 and the Connected Factory isn’t derived from the sheer volume of connections; it comes from creating more meaningful connections and the competitive edge gained by the harmonious dialogue between devices and the humans managing them. These capabilities create the context to take automation and remote management to new levels, thereby making the Connected Factory a reality.

As part of the Industry 4.0 movement, the Connected Factory demands a new approach to the concept of factory automation. With the thoughtful integration of supporting components that are designed specifically for this goal, the ability to connect, monitor, and control will drive productivity well into the future.


Refer to: http://embedded-computing.com/articles/elements-success-the-connected-factory-needs-flourish-2014/

Fanless Mini-ITX mainboard with Intel Atom Processor “Cedar Trail” D2550

acrosser Technology Co. Ltd, a global professional industrial and Embedded Computerprovider, announces the newMini-ITX mainboard, AMB-D255T3, which carries the Intel dual- core 1.86GHz Atom Processor D2550. AMB-D255T3 features onboard graphics via VGA and HDMI, DDR3 SO-DIMM support, PCI slot, mSATA socket with SATA & USB signals, and ATX connector for easy power in. AMB-D255T3 also provides complete I/O such as 6 x COM ports, 6 x USB2.0 ports, 2 x GbE RJ-45 ports, and 2 x SATA port.
AMB-D255T3 supports dual displays via VGA, HDMI, or 18-bit LVDS. It offers one Mini PCIe slot and one PCI slot for expansion; the Mini PCIe slot carries SATA and USB signals and can accommodate an mSATA storage module.
AMB-D255T3 is an excellent solution for applications that require powerful computing with low power consumption in a small-form-factor motherboard, backed by a complete set of I/O functions. Users can easily deploy a system solution with this fanless mainboard, making it a fast time-to-market choice for system integrators.

Key features:
‧ Intel Atom D2550 1.86GHz
‧ 1 x DDR3 SO-DIMM up to 4GB
‧ 1 x VGA
‧ 1 x HDMI
‧ 1 x 18-bit LVDS
‧ 6 x USB2.0
‧ 6 x COM
‧ 2 x GbE (Realtek RTL8111E)
‧ 1 x PS/2
‧ 1 x KB/MS
‧ 1 x MiniPCIe slot
‧ 1 x PCI slot
‧ 2 x SATA II
‧ 8-bit GPIO

Product Information:
http://www.acrosser.com/Products/Single-Board-Computer/Mini-ITX-&-others/AMB-D255T3 (Mini-ITX-)/Intel-Atom-D2550-AMB-D255T3-(Mini-ITX)-.html

Contact us:
http://www.acrosser.com/inquiry.html

Meet Acrosser at Embedded World 2014!


acrosser Technology, a world-leading industrial computer manufacturer, announces its participation in Embedded World 2014, taking place February 25-27, 2014 in Nuremberg, Germany. We warmly invite all customers to come and meet us in Hall 5, booth number 5-305!

At Embedded World 2014, Acrosser Technology will showcase its new embedded system, AES-HM76Z1FL, and its in-vehicle computer, AIV-HM76V0FL. Both products will be shown in live demos, demonstrating their stability and high performance to the audience. What’s more, Acrosser will present a selection of the Mini-ITX boards most favored by our loyal customers in a featured zone inside the booth. Make sure you do not miss our Mini-ITX collection!

For gaming applications, Acrosser will exhibit the all-in-one gaming board AMB-A55EG1. The board features great computing and graphics performance and high compatibility with multiple operating systems. If you are looking for a gaming system, do not miss our AGS-HM76G1, a cost-effective PC-based gaming solution that can be easily applied to your VLT, amusement, and slot machines.

In addition, Acrosser will highlight its focus on networking appliances. With a series of products on display, we are ready to be your solution provider! We look forward to making your embedded idea a reality, and we cordially invite you to visit our booth and discover our outstanding products.

Feel free to pay us a visit in Hall 5 at Booth 5-305!

Acrosser Technology Co., Ltd.
For more information, please visit the Acrosser Technology website:
www.acrosser.com

Contact: http://www.acrosser.com/inquiry.html

Apply for our AES-HM76Z1FL Product Testing Event NOW!

acrosser Technology Co., Ltd., a world-leading industrial and embedded computer designer and manufacturer, is pleased to announce that our AES-HM76Z1FL Product Testing Event has officially begun! To experience the AES-HM76Z1FL’s superb computing performance, acrosser welcomes system integrators from all industries to join the event! The campaign lasts only three months and ends in March 2014, so don’t hesitate to submit your application! Please visit our event web page or look for the banner on our website!

So, are you ready to explore the excellence of Acrosser’s embedded products? To sign up for the AES-HM76Z1FL Product Testing Event, please click here, complete the online application form, and submit it. acrosser will review your eligibility upon receiving your request. There is only a limited number of AES-HM76Z1FL units available for this event, so we encourage you to apply early!

Once your application has been approved, Acrosser will send a confirmation e-mail and an AES-HM76Z1FL Product Release Form. Please double-check that the Product Release Form has the correct mailing information so that we can get the product to you in a timely manner. You will then receive a free one-month loan of the product, starting immediately!

Please mark the date, and make sure to return the Feedback Sheet and the AES-HM76Z1FL unit to Acrosser on time. We will then send a small gift to the address you provided as a closing of the event. If you are interested in placing an order after product testing, please contact our sales team for a discount!

We are prepared to be amazed by your fascinating projects. With its small form factor and fanless design, the AES-HM76Z1FL can be installed almost anywhere and used across a wide range of industrial projects. Apply for the event, and experience great computing performance!

Product Information:
http://www.acrosser.com/Products/Embedded-Computer/Fanless-Embedded-Systems/AES-HM76Z1FL/Intel-Core-i3/i7-AES-HM76Z1FL.html

Contact us:
http://www.acrosser.com/inquiry.html

Challenges with Android in the car

In 2012, for the first time in its 26-year history, the J.D. Power Auto Quality Study found that the embedded system is now the biggest source of problems in new cars. Therefore, OEMs are justifiably concerned with the reliability, stability, and security of Android.

Android’s extremely large source code base coupled with its open source development model results in extreme churn – literally thousands of edits per day across Android and its underlying Linux kernel. This guarantees a steady flow of vulnerabilities. A quick search of the U.S. CERT National Vulnerability Database turns up numerous vulnerabilities of varying severity for in-vehicle infotainment systems. Here is a sampling of the worst offenders:

We point these particular vulnerabilities out because they fall into the highest severity category of remote exploitability. They are used by hackers to root Android phones and tablets, and automotive manufacturers want to ensure that the same vulnerabilities do not threaten Android- or Linux-based infotainment systems.

Another concern with Android is driver/passenger safety. Automotive electronics architecture is in the midst of a major trend reversal: Instead of adding more and more processors for new functions, disparate functions are being consolidated into a smaller number of high-performance multicore processors in order to reduce size, weight, power, and component/wiring cost. Processor consolidation is leading safety-critical systems to be integrated with infotainment. The consolidation trend is aided by next-generation, performance-efficient multicore processor platforms, such as the “Jacinto” and OMAP processor families, including TI’s OMAP 5 platform, which offers a dual-core, power-efficient ARM Cortex-A15 processing architecture.

Examples of such mixed-criticality consolidation include OEMs looking to host real-time clusters, rear-view cameras, and Advanced Driver Assistance Systems (ADAS) within the center stack computer. Next-generation Android infotainment systems must ensure that applications and multimedia interact seamlessly with safety functions and pose no risks to passengers.

 

refer to: http://embedded-computing.com/articles/the-future-android-vehicles/

The Reliable Software Developers’ Conference – UK, May 2014

Technology event organiser Energi Technical has announced that it will be launching “The Reliable Software Developers’ Conference”, scheduled for May 2014.

This one-day conference will provide an important forum for engineers and developers working on safety-critical systems and high-availability systems. It is expected to attract software developers from industries such as automotive, railway systems, aerospace, banking, medical, and energy. www.rsd-conference.co.uk

“In recent years, software has become so complex that ensuring safety and reliability is now a major challenge,” said Richard Blackburn, Event Organiser. “Many systems now have millions of lines of code and handle enormous amounts of data. Further to this, modern computer-based systems make millions of decisions every second and also have to be immune to interference and unpredictable events. This event will look at the MISRA coding standards, debug tools, and software testing tools that are available to assist software programmers and engineers seeking to develop reliable and safety-critical systems.”

The Reliable Software Developers’ Conference will be co-located with the 2014 UK Device Developers’ Conference. Both will be one-day conferences run in Bristol, Cambridge, Northern England, and Scotland on May 20th, May 22nd, June 3rd, and June 5th respectively.

Delegates attending either event will have the opportunity to sit in on technical presentations and half-day technical workshops, and to attend a vendor exhibition of tools and technology for the development of real-time and embedded systems. www.device-developer-conference.co.uk

“Advanced Debug Tools, Code Test, Version Control, Verification Tools and Software Standards have been a growing feature of recent conferences, so it made sense to create a dedicated event,” said Richard. “There will be a lot of expertise available to delegates, and the chance to meet a broad range of vendors of test technologies and tools, all under one roof.”

Developed in collaboration with MISRA (Coding Standards), the Reliable Software Developers’ Conference will feature a number of presentations in the morning, followed by a half-day technical workshop in the afternoon. The presentations will be free and open to delegates of both conferences, but the half-day workshops will be subject to a charge of £75. Delegates will learn about developments in coding standards, test and verification tools, and best practices, and will also have the opportunity to meet many industry experts.

Refer to: http://embedded-computing.com/news/the-uk-may-2014/