With our goal in mind, we analyzed our assignment a second time, looking for information we could use. The specific goal of this phase was to form a project vision and draw up the so-called PvE (Programma van Eisen), or Design Specifications.
First, the link to the full report (in Dutch):
My research was aimed at the sub-questions agreed upon in the plan of approach. To answer these questions, I carried out research and drew conclusions.
The highlights of my research:
To create the impression of a humanoid robot butler, it is important to give the robot human features the user recognizes; for the butler part, it is equally important to give the robot features the user recognizes from a real-life butler. Key features to accomplish this are a head, arms, and a body. To make sure the user sees a robot, and not a mechanical ball of wires and gears, it is also important to give the robot metal-looking plating.
Looking at this from a social point of view, the robot must be able to comply with established rules of good service: it should be anticipatory, reliable, and above all discreet. To further strengthen the idea of a social robot, it should be capable of interpreting spoken commands and reacting to them with "spoken" confirmation or a joke. Although this isn't part of the official assignment, it could greatly improve our robot.
When designing a social/intelligent machine, ethics quickly becomes a heated discussion. To make sure we don't get caught up in this, we decided to avoid making decisions that could affect the user in any way.
How many different signals will the processor have to manage?
The number of output signals will range from 9 to 12: at least 4 for pick & place, at least 2 for movement, one more if we use a Kinect, and, if we want our robot to "speak", another output for the speaker.
The number of input signals will range from 6 to 10: 2 for line following, 3 or more from pick & place, 1 from the Kinect if we use it, and 1 for the battery level. A rough pin-mapping sketch is shown below.
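To get a feel for these numbers, here is a minimal Arduino-style pin-mapping sketch; every pin number and signal name below is a placeholder assumption, not a final design choice:

// Rough mapping of the estimated I/O onto Arduino pins (all pin numbers assumed).
const int PICK_PLACE_OUT[4] = {2, 3, 4, 5};   // at least 4 outputs for pick & place
const int MOVE_OUT[2]       = {6, 7};         // at least 2 outputs for movement
const int SPEAKER_OUT       = 8;              // optional output if the robot "speaks"

const int LINE_SENSOR_IN[2] = {A0, A1};       // 2 inputs for line following
const int PICK_PLACE_IN[3]  = {A2, A3, A4};   // 3 or more inputs from pick & place
const int BATTERY_IN        = A5;             // 1 input for the battery level

void setup() {
  for (int i = 0; i < 4; i++) pinMode(PICK_PLACE_OUT[i], OUTPUT);
  for (int i = 0; i < 2; i++) pinMode(MOVE_OUT[i], OUTPUT);
  pinMode(SPEAKER_OUT, OUTPUT);
  // analog pins A0-A5 are read with analogRead and need no pinMode call here
}

void loop() {
  int batteryLevel = analogRead(BATTERY_IN);  // example: poll the battery level
  (void)batteryLevel;                         // remaining control logic goes here
}

Counting the pins this way makes it easy to check whether a single Arduino Uno offers enough I/O, or whether the second Arduino mentioned below would be needed.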
If we end up using a Kinect in our robot, the main processor will most likely come from a laptop, because the data stream from the two cameras the Kinect uses can reach about 30 MB/s. With that comes the fact that the clock speed of the processor has to be at least 100 MHz to properly use the data supplied. If we end up not needing these high specs, an Arduino Uno embedded system will be used, or maybe even two Arduinos linked together over an I2C bus.
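As a rough illustration of that last option, here is a minimal sketch of two Arduinos linked over I2C using the standard Wire library; the slave address (0x08) and the single status byte being exchanged are assumptions for the example only:

// --- Master Arduino: periodically requests one status byte from the slave ---
#include <Wire.h>

const uint8_t SLAVE_ADDRESS = 0x08;   // assumed address for this example

void setup() {
  Wire.begin();                       // join the I2C bus as master
  Serial.begin(9600);
}

void loop() {
  Wire.requestFrom(SLAVE_ADDRESS, (uint8_t)1);  // ask the slave for 1 byte
  if (Wire.available()) {
    uint8_t status = Wire.read();
    Serial.println(status);           // e.g. a sensor state or battery level
  }
  delay(500);
}

// --- Slave Arduino (separate board): answers each request with one byte ---
#include <Wire.h>

volatile uint8_t statusByte = 0;      // updated elsewhere from a sensor reading

void requestEvent() {
  Wire.write(statusByte);             // send the byte back to the master
}

void setup() {
  Wire.begin(0x08);                   // join the I2C bus as slave at address 0x08
  Wire.onRequest(requestEvent);       // called whenever the master requests data
}

void loop() {
  // read sensors here and update statusByte
}

One board acts as the master and polls, the other answers; this keeps the wiring down to the two I2C lines (SDA/SCL) plus a shared ground.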
The most widely used language for programming robots is C++, although a specific piece of hardware or software may require a different language; Linux, for example, is a UNIX-like system with its own conventions.
With the information I found in this phase, I will continue to research the possibilities of Linux-based systems, ways of linking different processors together, and the different ways electronics engineers around the world build robots.
Until next time folks!