Due to Covid-19, I don't have access to a physical NAO and need to work with simulations. The goal is to model dialogues of different complexity, also involving gestures. Speech recognition is the most important feature here, but simulation of other features that add more realism (like voice) would be appreciated too. What I have tried so far:

- Choregraphe: The included simulation works fine, but is very restricted in its abilities. If I'm not missing something, dialogues are only simulated in a written chat - I type the speech input and get 'speech bubbles' as a response.
- Webots (using Python controllers): The most promising approach so far, but there is basically no documentation on how to write NAO controllers. I could not figure out how to make the Speaker() class work. The robot and world simulation from naoqisim (which is also no longer sustained) seems to run fine.
- Webots using a ROS controller: There is no official support for Mac, and the recommended installation for ROS Kinetic has not yet worked for me.

I'd appreciate any hint on whether Webots is even suitable for dialogues (it seems to be mostly focused on movement), or advice on other suitable simulations.

Answer: The ALTextToSpeech and ALSpeechRecognition APIs don't work on the virtual robot, unfortunately. From the docs here:

    ACAPELA, microAITalk and Nuance engines are only available on the real robot. When using a virtual robot, said text can be visualized in the Choregraphe Robot View and Dialog panel.

    cannot be tested on a simulated robot - This module is only available on a real robot, you cannot test it on a simulated robot.

The text interaction can be used to test the flow of your dialogs, but it won't allow you to test the nuances of speech recognition properly. Webots is not supported any more, and I've never had any luck getting it set up. The best currently available simulation environment for Pepper/NAO is the ROS Gazebo stack, but it's really not designed for audio simulation either. It would allow you to simulate the robot making gestures and moving through the world, but you would have to write your own custom code (ROS nodes, in Python or C++) to process the audio, do speech recognition, and output speech (connected up to a mic and speakers you have, for example). If you plan to use a NAOqi QiChat chatbot, you could use the naoqi Python APIs to run that and just connect external speech-to-text and text-to-speech services to it. Though if you want more complex speech interactions, I'd suggest a full-blown chatbot (Dialogflow, IBM Watson, etc.).

---

I am creating a Webots project on OSX, where I am including the following: #include

This fails with:

    /Applications/Webots/resources/projects/default/libraries/qt_utils/core/MainApplication.hpp:17:
    /Applications/Webots/webots.app/Contents/Frameworks/amework/Headers/QApplication:1:10: fatal error: 'qapplication.h' file not found

All content of /Applications/Webots/webots.app/Contents/Frameworks/amework/Headers/QApplication:

    #include "qapplication.h"

The QApplication file content is too short, and it seems like I cannot find qapplication.h anywhere on the file system - is that normal? Would it be more sensible to use a local installation of the Qt framework rather than the one that comes with Webots? How do I change the .pro file then, to link to a local installation of Qt rather than to /Applications/Webots/resources/projects/default/libraries/qt_utils?

My make file:

    CXX_SOURCES = entry_points.cpp
    QT_UTILS = /Applications/Webots/resources/projects/default/libraries/qt_utils
    greaterThan(QT_MAJOR_VERSION, 4): QT += widgets
    include /Applications/Webots/resources/Makefile.include
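On the .pro-file question: with qmake, the Qt version you link against is determined by which qmake binary generates the Makefile, so switching to a local Qt mostly means invoking that installation's qmake. A minimal, hypothetical .pro sketch (the local Qt path in the comment is an assumption, not from the original post):

```qmake
# Run with the local Qt's qmake instead of Webots' build, e.g.:
#   ~/Qt/5.15.2/clang_64/bin/qmake project.pro   (path is hypothetical)
TEMPLATE = app
QT += core gui
greaterThan(QT_MAJOR_VERSION, 4): QT += widgets
SOURCES += entry_points.cpp
# Keep this line only if you still need headers from Webots' qt_utils library:
INCLUDEPATH += /Applications/Webots/resources/projects/default/libraries/qt_utils
```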
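Back in the NAO thread, the answer suggests using text interaction to test the flow of your dialogs even though speech recognition itself can't be simulated. A minimal sketch of that idea (this is not the QiChat engine; the rule format and matching are assumptions made for illustration):

```python
# Minimal text-based dialog-flow tester: a stand-in for typing into
# Choregraphe's dialog panel. Rules and replies are hypothetical.
import re

class DialogFlow:
    def __init__(self):
        self.rules = []  # list of (compiled pattern, canned reply)

    def rule(self, pattern, reply):
        self.rules.append((re.compile(pattern, re.IGNORECASE), reply))

    def respond(self, text):
        # Return the reply of the first rule whose pattern matches the input.
        for pattern, reply in self.rules:
            if pattern.search(text):
                return reply
        return "Sorry, I did not understand."

flow = DialogFlow()
flow.rule(r"\bhello\b", "Hello! How can I help?")
flow.rule(r"\bwave\b", "[gesture: wave] Waving now.")

print(flow.respond("Hello robot"))  # greeting rule fires
print(flow.respond("please wave"))  # gesture rule fires
```

Exercising the flow with typed input like this checks the branching of a dialog, but, as the answer notes, says nothing about how well real speech recognition would handle the same utterances.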
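The answer's Gazebo suggestion boils down to wiring your own mic -> STT -> chatbot -> TTS chain around the simulator. A sketch of just that wiring, where every stage is a stub standing in for a real component (a ROS audio topic, an external STT/TTS service, QiChat or Dialogflow) - the function names and behavior are assumptions:

```python
# Hypothetical speech pipeline: only the wiring is the point; each stage
# below would be replaced by a real service or ROS node in practice.

def speech_to_text(audio_chunk):
    # Stand-in for an external speech-to-text service call.
    return audio_chunk.decode("utf-8")

def chatbot(utterance):
    # Stand-in for QiChat, Dialogflow, IBM Watson, etc.
    return "You said: " + utterance

def text_to_speech(text):
    # Stand-in for a text-to-speech service; returns "audio" bytes.
    return text.encode("utf-8")

def handle_mic_input(audio_chunk):
    """One pass through the STT -> dialog -> TTS chain."""
    return text_to_speech(chatbot(speech_to_text(audio_chunk)))

reply_audio = handle_mic_input(b"wave at me")
print(reply_audio)
```

Swapping any single stage (say, a cloud STT service for the stub) leaves the rest of the chain untouched, which is why the answer frames this as custom glue code rather than something the simulator provides.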