
A Guide to Writing an End-of-Course Paper


I. Guidance on writing an end-of-course paper

2. Table of contents: a brief list of the paper's main sections. (This part is usually optional; follow your instructor's requirements.)

3. Abstract: an excerpt of the paper's main content that should be short, precise, and complete. Around 200 to 300 words is ideal; do not exceed 400.

4. Keywords or subject terms: keywords are chosen from the paper's title, abstract, and body; they are words that carry real meaning for expressing the paper's central content. They are the terms an indexing system uses to mark the paper's content features, so that information systems can collect them and readers can retrieve the paper. Each paper usually takes 3 to 6 keywords, placed on a new line at the lower left of the abstract. Subject terms are standardized vocabulary: to determine them, analyze the paper's subject and convert it into the standard terms of a thesaurus according to the indexing and combination rules.

II. How do I write an end-of-course paper for a labor course?

Our school offered a labor course. As the saying goes, laborers are the most honorable people. As ordinary laborers, while we poured out our sweat we also gave the campus a clean and pleasant environment. This labor course let me feel the hardship of workers and the pressure of making a living; it also strengthened our sense of labor and helped us form a correct attitude toward life.

III. How do I write an end-of-course paper for a course in my major?

The instructor will usually announce the required paper format in advance; just write to those requirements.

IV. How do I write the title of an end-of-course paper?

(1) Title. Every scientific paper has a title; a paper cannot go "untitled". A title usually runs to about 20 characters, and its scope should match the content. Avoid subtitles where possible, and avoid serialized titles such as "Report I" and "Report II". The title should be direct, with no exclamation marks or question marks, and a scientific paper's title must not be written in the language of advertising or news reporting.

(2) Signature. A scientific paper should be signed with the authors' real names and actual affiliations, chiefly to fix responsibility and credit for the work and to make follow-up by later researchers easier. Strictly speaking, an author is someone who took responsibility for the whole process: topic selection, argumentation, the literature review, study design, methodology, experimental work, data processing, summarization, and writing, and who can answer questions about the paper. Everyone who actually took part should be listed, ordered by contribution, and each person's consent is required before their name is signed. An academic supervisor may be listed as an author or, depending on the circumstances, simply thanked in the acknowledgments. Administrative leaders normally do not sign.

V. How do I write the opening of an end-of-course paper?

1. Introduction. As the opening of the paper, the introduction should, in a short space, explain the background and purpose of the writing. It should cover not only the topic and scope of the paper but also the current state of the problem under study and an outline of previous research, and it should make clear what the present study sets out to do.

2. Body

3. Conclusion. The conclusion closes the paper: it is the final judgment reached, on the basis of the results and the discussion, through rigorous logical reasoning and argument. The language of an academic conclusion should be rigorous, concise, precise, and logical.

VI. Does an end-of-course paper require an oral presentation?

An end-of-course paper generally does require an oral presentation.

Like an undergraduate thesis, a junior-college graduation thesis requires the student to turn the finished paper into a PPT and then give an oral defense before graduation. During the defense the student stands at the lectern and presents the content of the paper, after which the supervisors below give professional comments and suggestions.

VII. Are end-of-course papers checked for plagiarism?

For end-of-term course papers, instructors generally do run a plagiarism check. A few schools and instructors do not, but those are isolated cases. After all, most university instructors are very conscientious and responsible: out of a sense of duty to the school and the country, they train their students seriously and hold them to strict standards.

VIII. How do I write the closing remarks of an end-of-course paper?

The closing remarks of a paper usually summarize the conclusions reached and look ahead to future research. The conclusion should answer the research question concisely and clearly, summarize the findings and contributions, point out remaining shortcomings, and sketch directions for future work.

The acknowledgments thank all the people and institutions that helped with the research, including supervisors, colleagues, the research team, laboratory technicians, and the organizations and individuals that provided data and resources.

The language of the acknowledgments should be sincere, earnest, and objective, and should observe academic ethics: no promotional pitches, sarcasm, plagiarism, or other improper remarks.

It is best to have the closing remarks and acknowledgments in mind before you start writing, so that the results of the research, and your gratitude, come across well.

IX. How do I write a university end-of-course paper?

(1) Title. Every scientific paper has a title; a paper cannot go "untitled". A title usually runs to about 20 characters, and its scope should match the content. Avoid subtitles where possible, and avoid serialized titles such as "Report I" and "Report II". The title should be direct, with no exclamation marks or question marks, and a scientific paper's title must not be written in the language of advertising or news reporting.

(2) Signature. A scientific paper should be signed with the authors' real names and actual affiliations, chiefly to fix responsibility and credit for the work and to make follow-up by later researchers easier. Strictly speaking, an author is someone who took responsibility for the whole process: topic selection, argumentation, the literature review, study design, methodology, experimental work, data processing, summarization, and writing, and who can answer questions about the paper. Everyone who actually took part should be listed, ordered by contribution, and each person's consent is required before their name is signed. An academic supervisor may be listed as an author or, depending on the circumstances, simply thanked in the acknowledgments. Administrative leaders normally do not sign.

(3) Introduction. The introduction is the part of a paper that draws readers in, and writing it well matters. A good introduction usually lets readers see how your work developed and where it sits within the research direction. Set out the basis, rationale, background, and purpose of the study; review the necessary literature and indicate how the problem has developed. Keep the text concise.

(4) Materials and methods. As required, describe truthfully the experimental subjects, equipment, animals, and reagents together with their specifications, and describe the experimental methods, measured indicators, judgment criteria, experimental design, grouping, and statistical methods. These parts may follow the requirements of the journal the paper is submitted to.

(5) Results. Experimental results should be tightly summarized, carefully analyzed, and presented logically. Sift the essential from the incidental and the true from the false, but never select results subjectively because they fail to match your expectations, let alone resort to fraud. Only data obtained while the technique was still unpracticed or the instruments unstable, data produced by technical failure or operator error, and data collected when the experimental conditions were not met may be discarded. When a problem is found, the reason must be noted on the original record; data must not be removed arbitrarily during summarization and processing just because they look anomalous. And when such data are discarded, the experimental data from the same conditions and the same period should be discarded with them, not just the points that disagree with your own view.

The presentation of results should stay on topic and be kept lean. Some data may not fit this paper yet be useful elsewhere; do not force them in. Use professional terminology wherever possible. Where a table will serve, do not use a figure, and where plain text will serve, a table may be unnecessary; this saves space and eases typesetting. Text, tables, and figures should not repeat one another. Unexpected phenomena or unusual changes observed during the experiment should be explained where needed, not silently discarded.

(6) Discussion. The discussion is the heart of the paper and its most difficult part. Take in the whole picture, seize the main points of contention, and rise from empirical observation to reasoned understanding. Analyze and reason from the experimental results; do not simply restate them. Engage the results and views in the relevant literature at home and abroad to bring out your own view, and in particular do not dodge opposing views. The discussion may offer hypotheses and ideas for developing the topic, but with appropriate restraint; it must not read like science fiction or fantasy.

(7) Conclusion. The conclusion should state clearly which results and inferences are reliable. It should be concise and may be written point by point. Do not fall back on vague wording that merely gestures at a "summary".

(8) References. This is a very important part of a paper, and one where problems are common. References are listed so that readers can see the background of the research question and trace the sources easily; citing them also respects earlier work and positions the paper's own contribution accurately.

A paper needs references almost from beginning to end. The introduction, for example, should cite the most important and most directly relevant literature; the methods should cite the methods adopted or consulted; the results sometimes cite data for comparison with the literature; and the discussion should cite the various results and views, supporting or opposing, that bear on the paper.

(9) Acknowledgments. Supervisors, technical assistants, suppliers of special reagents or equipment, financial sponsors, and those who offered important suggestions are the people to thank. Written acknowledgments should be sincere and truthful, not vulgar. Do not thank people in vague generalities, and do not thank only the professors while overlooking everyone else. Obtain the consent of those acknowledged before the paper is written, and never use acknowledgments to wrap yourself in borrowed prestige.

(10) Abstract. Summarize the whole paper in about 200 words, always placed at the beginning. The abstract must be written carefully and be engaging: reading it should be like seeing the paper in miniature, or should make the reader want to go on to the relevant sections. Several keywords should also be given, chosen as genuine academic terms rather than pieced-together everyday words.

X. How should the end-of-course paper for an Introduction to Robotics Engineering course be written?

A roundup of robotics papers, 11 in total.

Robotics (11 papers)

[1] Natural Language Robot Programming: NLP integrated with autonomous robotic grasping

Title: Natural Language Robot Programming: NLP integrated with autonomous robotic grasping

Link: https://arxiv.org/abs/2304.02993

Venue: IROS

Code: not open-sourced

Authors: Muhammad Arshad Khan, Max Kenney, Jack Painter, Disha Kamale, Riza Batista-Navarro, Amir Ghalamzan-E

Overview: This paper proposes a grammar-based natural-language framework for robot programming, focused on concrete tasks such as pick-and-place. The framework extends its vocabulary through a custom dictionary of action words, converts spoken instructions into text with the Google Speech-to-Text API, and uses the framework to derive a joint-space trajectory for the robot. It was validated both in simulation and in the real world with a Franka Panda arm fitted with a calibrated camera and a microphone: participants issued verbal instructions for pick-and-place tasks, which were converted to text and processed by the framework to obtain the robot's joint-space trajectory. The results show a high system usability score. The framework does not rely on transfer learning or large datasets, and its vocabulary can be extended easily. In future work, the authors plan a user study comparing the framework with other approaches to human-assisted pick-and-place.

Abstract: In this paper, we present a grammar-based natural language framework for robot programming, specifically for pick-and-place tasks. Our approach uses a custom dictionary of action words, designed to store together words that share meaning, allowing for easy expansion of the vocabulary by adding more action words from a lexical database. We validate our Natural Language Robot Programming (NLRP) framework through simulation and real-world experimentation, using a Franka Panda robotic arm equipped with a calibrated camera-in-hand and a microphone. Participants were asked to complete a pick-and-place task using verbal commands, which were converted into text using Google's Speech-to-Text API and processed through the NLRP framework to obtain joint space trajectories for the robot. Our results indicate that our approach has a high system usability score. The framework's dictionary can be easily extended without relying on transfer learning or large data sets. In the future, we plan to compare the presented framework with different approaches of human-assisted pick-and-place tasks via a comprehensive user study.
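
To make the action-word dictionary concrete, here is a minimal Python sketch of dictionary-based command parsing. It assumes the speech-to-text step has already produced plain text; the ACTION_WORDS table and parse_command helper are illustrative stand-ins, not the paper's NLRP code.

```python
# A minimal sketch of dictionary-based command parsing: synonyms of an
# action word map onto one canonical robot action. Illustrative only.
ACTION_WORDS = {
    "pick": {"pick", "grab", "take", "lift"},
    "place": {"place", "put", "drop", "set"},
}

def parse_command(text: str):
    """Map a transcribed spoken instruction to (canonical action, object phrase)."""
    tokens = text.lower().split()
    for canonical, synonyms in ACTION_WORDS.items():
        if synonyms & set(tokens):
            # Everything after the action word is treated as the object phrase.
            idx = max(i for i, t in enumerate(tokens) if t in synonyms)
            return canonical, " ".join(tokens[idx + 1:])
    return None, None

print(parse_command("please grab the red cube"))  # ('pick', 'the red cube')
```

A real system would then ground the returned object phrase to a detected object and plan a joint-space trajectory toward it.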

[2] ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments

Title: ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments

Link: https://arxiv.org/abs/2304.03047

Venue:

Code: https://github.com/MarSaKi/ETPNav

Authors: Dong An, Hanqing Wang, Wenguan Wang, Zun Wang, Yan Huang, Keji He, Liang Wang

Overview: This paper tackles the challenge of building agents for vision-language navigation in continuous environments, where an agent must follow instructions to move through its surroundings. It proposes a new navigation framework, ETPNav, centered on two key skills: 1) abstracting the environment and producing long-range navigation plans, and 2) avoiding obstacles in continuous environments. The framework performs online topological mapping by organizing the waypoints predicted along the traversed path, building a map without prior experience of the environment, and decomposes navigation into high-level planning and low-level control. ETPNav uses a transformer-based cross-modal planner to generate navigation plans from the topological map and the instructions, together with an obstacle-avoiding controller that applies a trial-and-error heuristic to keep the robot from getting stuck. Experiments show gains of more than 10% and 20% over prior state of the art on the R2R-CE and RxR-CE datasets, respectively. The code is open source at https://github.com/MarSaKi/ETPNav

Abstract: Vision-language navigation is a task that requires an agent to follow instructions to navigate in environments. It becomes increasingly crucial in the field of embodied AI, with potential applications in autonomous navigation, search and rescue, and human-robot interaction. In this paper, we propose to address a more practical yet challenging counterpart setting - vision-language navigation in continuous environments (VLN-CE). To develop a robust VLN-CE agent, we propose a new navigation framework, ETPNav, which focuses on two critical skills: 1) the capability to abstract environments and generate long-range navigation plans, and 2) the ability of obstacle-avoiding control in continuous environments. ETPNav performs online topological mapping of environments by self-organizing predicted waypoints along a traversed path, without prior environmental experience. It privileges the agent to break down the navigation procedure into high-level planning and low-level control. Concurrently, ETPNav utilizes a transformer-based cross-modal planner to generate navigation plans based on topological maps and instructions. The plan is then performed through an obstacle-avoiding controller that leverages a trial-and-error heuristic to prevent navigation from getting stuck in obstacles. Experimental results demonstrate the effectiveness of the proposed method. ETPNav yields more than 10% and 20% improvements over prior state-of-the-art on R2R-CE and RxR-CE datasets, respectively. Our code is available at https://github.com/MarSaKi/ETPNav.
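
As a rough illustration of the high-level planning half, the sketch below runs a shortest-path query over a small hand-built topological map, with nodes standing in for predicted waypoints. This generic Dijkstra search is only an assumed stand-in; it is not ETPNav's learned mapping or transformer planner.

```python
# Shortest-path planning over a topological map: nodes are waypoints,
# weighted edges connect traversable neighbors. Generic illustration only.
import heapq

def dijkstra(graph, start, goal):
    """graph: {node: [(neighbor, cost), ...]} -> cheapest path as a node list."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr, w in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(frontier, (cost + w, nbr, path + [nbr]))
    return None

topo_map = {"A": [("B", 1.0)], "B": [("C", 0.5), ("D", 2.0)], "C": [("D", 0.5)], "D": []}
print(dijkstra(topo_map, "A", "D"))  # ['A', 'B', 'C', 'D']
```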

[3] Object-centric Inference for Language Conditioned Placement: A Foundation Model based Approach

Title: Object-centric Inference for Language Conditioned Placement: A Foundation Model based Approach

Link: https://arxiv.org/abs/2304.02893

Venue:

Code: not open-sourced

Authors: Zhixuan Xu, Kechun Xu, Yue Wang, Rong Xiong

Overview: This paper studies language-conditioned object placement, in which a robot must satisfy the spatial-relation constraints stated in a language instruction. Previous work based on rule-based language parsing or scene-centric visual representations either restricts the form of instructions and reference objects or requires large amounts of training data. The paper proposes an object-centric framework that uses foundation models to ground the reference objects and spatial relations for placement, making the approach more sample-efficient and generalizable. Experiments show a placement success rate of 97.75% with only about 0.26M trainable parameters, along with better generalization to unseen objects and instructions; with only 25% of the training data, the model still beats the top competing approach.

Abstract: We focus on the task of language-conditioned object placement, in which a robot should generate placements that satisfy all the spatial relational constraints in language instructions. Previous works based on rule-based language parsing or scene-centric visual representation have restrictions on the form of instructions and reference objects or require large amounts of training data. We propose an object-centric framework that leverages foundation models to ground the reference objects and spatial relations for placement, which is more sample efficient and generalizable. Experiments indicate that our model can achieve a 97.75% success rate of placement with only ~0.26M trainable parameters. Besides, our method generalizes better to both unseen objects and instructions. Moreover, with only 25% training data, we still outperform the top competing approach.
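
To show what a grounded spatial-relation constraint can look like, here is a toy Python check over 2-D bounding boxes. The relation names and the satisfies helper are hypothetical; the paper grounds relations with foundation models rather than hand-written rules like these.

```python
# Toy spatial-relation checker over axis-aligned 2-D boxes (x0, y0, x1, y1).
# Purely illustrative; not the paper's grounding mechanism.
def center(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def satisfies(relation, placed_box, reference_box):
    """Return True if placed_box stands in `relation` to reference_box."""
    (px, py), (rx, ry) = center(placed_box), center(reference_box)
    if relation == "left of":
        return px < rx
    if relation == "right of":
        return px > rx
    if relation == "above":
        return py < ry  # image coordinates: smaller y is higher up
    raise ValueError(f"unknown relation: {relation}")

cup, plate = (10, 40, 30, 60), (50, 40, 80, 60)
print(satisfies("left of", cup, plate))  # True
```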

[4] DoUnseen: Zero-Shot Object Detection for Robotic Grasping

Title: DoUnseen: Zero-Shot Object Detection for Robotic Grasping

Link: https://arxiv.org/abs/2304.02833

Venue:

Code: not open-sourced

Authors: Anas Gouda, Moritz Roidl

Overview: This paper asks how to detect objects when no dataset of the objects exists or when there are very many of them, so that each specific object forms its own class and must be handled separately. It studies detection with an unknown and growing number of classes, and classification without any training: the goal is a zero-shot object detection system that requires no training and lets a new object class be added just by capturing a few images of it. The proposed method splits detection into two steps, combining a zero-shot object segmentation network with a zero-shot classifier. The system is evaluated on unseen datasets and compared with a trained Mask R-CNN model; the results show that performance ranges from practical to unsuitable depending on the environment setup and the object types. The paper also provides a code library for running zero-shot object detection.

Abstract: How can we segment varying numbers of objects where each specific object represents its own separate class? To make the problem even more realistic, how can we add and delete classes on the fly without retraining? This is the case of robotic applications where no datasets of the objects exist or application that includes thousands of objects (E.g., in logistics) where it is impossible to train a single model to learn all of the objects. Most current research on object segmentation for robotic grasping focuses on class-level object segmentation (E.g., box, cup, bottle), closed sets (specific objects of a dataset; for example, YCB dataset), or deep learning-based template matching. In this work, we are interested in open sets where the number of classes is unknown, varying, and without pre-knowledge about the objects' types. We consider each specific object as its own separate class. Our goal is to develop a zero-shot object detector that requires no training and can add any object as a class just by capturing a few images of the object. Our main idea is to break the segmentation pipelines into two steps by combining unseen object segmentation networks cascaded by zero-shot classifiers. We evaluate our zero-shot object detector on unseen datasets and compare it to a trained Mask R-CNN on those datasets. The results show that the performance varies from practical to unsuitable depending on the environment setup and the objects being handled. The code is available in our DoUnseen library repository.
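
A minimal sketch of the second step, classification without training: each new class is defined by embeddings of a few reference images, and a segmented crop is assigned to the class with the highest cosine similarity. The embedding vectors below are random placeholders; DoUnseen's actual networks are not reproduced here.

```python
# Nearest-prototype zero-shot classification by cosine similarity.
# Embeddings are random stand-ins for a real image-embedding model.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(crop_embedding, class_prototypes):
    """class_prototypes: {name: [embedding, ...]} built from a few photos."""
    scores = {
        name: max(cosine(crop_embedding, ref) for ref in refs)
        for name, refs in class_prototypes.items()
    }
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(0)
protos = {"mug": [rng.normal(size=8)], "box": [rng.normal(size=8)]}
query = protos["mug"][0] + 0.1 * rng.normal(size=8)  # noisy view of the mug
print(classify(query, protos)[0])  # 'mug'
```

Adding a class at runtime then amounts to appending a few new reference embeddings to the dictionary, with no retraining.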

[5] Core Challenges in Embodied Vision-Language Planning

Title: Core Challenges in Embodied Vision-Language Planning

Link: https://arxiv.org/abs/2304.02738

Venue: JAIR

Code: not open-sourced

Authors: Jonathan Francis, Nariaki Kitamura, Felix Labelle, Xiaopeng Lu, Ingrid Navarro, Jean Oh

Overview: This paper surveys challenges at the intersection of computer vision, natural language processing, and robotics in modern AI, focusing on Embodied Vision-Language Planning (EVLP) tasks. EVLP tasks are complex problems involving embodied perception, language, and physical interaction, combining computer vision and NLP to improve a robot's ability to interact with physical environments. The paper proposes a taxonomy of EVLP tasks and gives a detailed analysis and comparison of current methods, new algorithms, metrics, simulators, and datasets. It closes with the core challenges that new work should address, stressing task design that promotes model generalizability and real-world deployment.

Abstract: Recent advances in the areas of Multimodal Machine Learning and Artificial Intelligence (AI) have led to the development of challenging tasks at the intersection of Computer Vision, Natural Language Processing, and Robotics. Whereas many approaches and previous survey pursuits have characterised one or two of these dimensions, there has not been a holistic analysis at the center of all three. Moreover, even when combinations of these topics are considered, more focus is placed on describing, e.g., current architectural methods, as opposed to also illustrating high-level challenges and opportunities for the field. In this survey paper, we discuss Embodied Vision-Language Planning (EVLP) tasks, a family of prominent embodied navigation and manipulation problems that jointly leverage computer vision and natural language for interaction in physical environments. We propose a taxonomy to unify these tasks and provide an in-depth analysis and comparison of the current and new algorithmic approaches, metrics, simulators, and datasets used for EVLP tasks. Finally, we present the core challenges that we believe new EVLP works should seek to address, and we advocate for task construction that enables model generalisability and furthers real-world deployment.

[6] Learning Stability Attention in Vision-based End-to-end Driving Policies

Title: Learning Stability Attention in Vision-based End-to-end Driving Policies

Link: https://arxiv.org/abs/2304.02733

Venue:

Code: not open-sourced

Authors: Tsun-Hsuan Wang, Wei Xiao, Makram Chahine, Alexander Amini, Ramin Hasani, Daniela Rus

Overview: This paper proposes using control Lyapunov functions (CLFs) to equip vision-based end-to-end driving policies with stability properties, and introduces stability attention in CLFs (att-CLFs) to handle environmental changes and improve learning flexibility. It also presents an uncertainty-propagation technique that is tightly integrated into the att-CLFs. The effectiveness of att-CLFs is demonstrated in a photo-realistic simulator and on a real full-scale autonomous vehicle.

Abstract: Modern end-to-end learning systems can learn to explicitly infer control from perception. However, it is difficult to guarantee stability and robustness for these systems since they are often exposed to unstructured, high-dimensional, and complex observation spaces (e.g., autonomous driving from a stream of pixel inputs). We propose to leverage control Lyapunov functions (CLFs) to equip end-to-end vision-based policies with stability properties and introduce stability attention in CLFs (att-CLFs) to tackle environmental changes and improve learning flexibility. We also present an uncertainty propagation technique that is tightly integrated into att-CLFs. We demonstrate the effectiveness of att-CLFs via comparison with classical CLFs, model predictive control, and vanilla end-to-end learning in a photo-realistic simulator and on a real full-scale autonomous vehicle.
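
For readers unfamiliar with CLFs, the toy example below checks the standard CLF decrease condition Vdot(x, u) + lambda * V(x) <= 0 for a scalar integrator and simulates the resulting stable closed loop. This is a textbook illustration, not the paper's att-CLF construction.

```python
# Toy check of the CLF decrease condition Vdot + lam * V <= 0 for the
# scalar integrator xdot = u with V(x) = x^2. Textbook illustration only.
lam = 1.0  # desired decay rate in the CLF condition

def V(x):
    return x * x

def clf_controller(x, k=2.0):
    u = -k * x                       # linear feedback candidate
    vdot = 2 * x * u                 # Vdot along xdot = u
    assert vdot + lam * V(x) <= 1e-9, "CLF condition violated"
    return u

x, dt = 1.0, 0.01
for _ in range(500):                 # Euler-simulate the closed loop
    x += dt * clf_controller(x)
print(f"x after 5 s: {x:.5f}")       # has decayed toward 0
```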

[7] Real-Time Dense 3D Mapping of Underwater Environments

Title: Real-Time Dense 3D Mapping of Underwater Environments

Link: https://arxiv.org/abs/2304.02704

Venue:

Code: not open-sourced

Authors: Weihan Wang, Bharat Joshi, Nathaniel Burgdorfer, Konstantinos Batsos, Alberto Quattrini Li, Philippos Mordohai, Ioannis Rekleitis

Overview: This paper addresses real-time dense 3D mapping on a resource-constrained autonomous underwater vehicle. Underwater vision-guided operation is among the most challenging settings, combining 3D motion under external forces, limited visibility, and the absence of global positioning. Online dense reconstruction is essential for obstacle avoidance and effective path planning, and autonomous operation is central to environmental monitoring, marine archaeology, resource utilization, and underwater cave exploration. The paper proposes combining SVIn2, a robust visual-inertial odometry method, with a real-time 3D reconstruction pipeline, and reports extensive evaluation on four challenging underwater datasets: the pipeline runs at high frame rates on a single CPU and produces reconstructions comparable to those of COLMAP, the state-of-the-art offline method.

Abstract: This paper addresses real-time dense 3D reconstruction for a resource-constrained Autonomous Underwater Vehicle (AUV). Underwater vision-guided operations are among the most challenging as they combine 3D motion in the presence of external forces, limited visibility, and absence of global positioning. Obstacle avoidance and effective path planning require online dense reconstructions of the environment. Autonomous operation is central to environmental monitoring, marine archaeology, resource utilization, and underwater cave exploration. To address this problem, we propose to use SVIn2, a robust VIO method, together with a real-time 3D reconstruction pipeline. We provide extensive evaluation on four challenging underwater datasets. Our pipeline produces comparable reconstruction with that of COLMAP, the state-of-the-art offline 3D reconstruction method, at high frame rates on a single CPU.

[8] Conformal Quantitative Predictive Monitoring of STL Requirements for Stochastic Processes

Title: Conformal Quantitative Predictive Monitoring of STL Requirements for Stochastic Processes

Link: https://arxiv.org/abs/2211.02375

Venue:

Code: not open-sourced

Authors: Francesca Cairoli, Nicola Paoletti, Luca Bortolussi

Overview: This paper studies predictive monitoring (PM): predicting at runtime, from the system's current state, whether a desired property will be satisfied. Because PM matters for runtime safety assurance and online control, methods must be efficient enough to allow timely intervention against predicted violations while providing correctness guarantees. The paper introduces quantitative predictive monitoring (QPM), the first PM method to support stochastic processes and rich specifications in Signal Temporal Logic (STL). Unlike most existing techniques, which predict only whether a property holds, QPM predicts its quantitative (robust) STL semantics, deriving prediction intervals that are cheap to compute and carry probabilistic guarantees: the intervals cover, with arbitrary probability, the STL robustness values of the system's stochastic evolution at runtime. Using machine learning and recent advances in conformal inference for quantile regression, the approach avoids expensive Monte Carlo simulation at runtime. The paper also shows how the monitors can be composed to handle composite formulas while preserving the guarantees, and demonstrates the effectiveness and scalability of QPM on four discrete-time stochastic processes of varying complexity.

Abstract: We consider the problem of predictive monitoring (PM), i.e., predicting at runtime the satisfaction of a desired property from the current system's state. Due to its relevance for runtime safety assurance and online control, PM methods need to be efficient to enable timely interventions against predicted violations, while providing correctness guarantees. We introduce quantitative predictive monitoring (QPM), the first PM method to support stochastic processes and rich specifications given in Signal Temporal Logic (STL). Unlike most of the existing PM techniques that predict whether or not some property $φ$ is satisfied, QPM provides a quantitative measure of satisfaction by predicting the quantitative (aka robust) STL semantics of $φ$. QPM derives prediction intervals that are highly efficient to compute and with probabilistic guarantees, in that the intervals cover with arbitrary probability the STL robustness values relative to the stochastic evolution of the system. To do so, we take a machine-learning approach and leverage recent advances in conformal inference for quantile regression, thereby avoiding expensive Monte-Carlo simulations at runtime to estimate the intervals. We also show how our monitors can be combined in a compositional manner to handle composite formulas, without retraining the predictors nor sacrificing the guarantees. We demonstrate the effectiveness and scalability of QPM over a benchmark of four discrete-time stochastic processes with varying degrees of complexity.
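
The conformal-inference machinery QPM builds on can be illustrated with a few lines of split-conformal calibration: hold out calibration scores, take their (1 - alpha) quantile, and the resulting interval covers new values with roughly the desired probability. Everything below (the trivial point predictor, the Gaussian stand-in for STL robustness values) is an assumption for illustration, not QPM itself.

```python
# Split-conformal calibration of a prediction interval. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
calibration = rng.normal(loc=0.5, scale=0.2, size=500)  # stand-in robustness values
predict = lambda: 0.5                                   # stand-in point predictor

alpha = 0.1
scores = np.abs(calibration - predict())                # nonconformity scores
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))       # conformal quantile rank
q = np.sort(scores)[k - 1]

test = rng.normal(loc=0.5, scale=0.2, size=2000)
covered = np.mean(np.abs(test - predict()) <= q)
print(f"interval: [{predict() - q:.3f}, {predict() + q:.3f}], coverage ~ {covered:.3f}")
```

The guarantee holds regardless of the predictor's quality; a better predictor simply yields a tighter interval.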

[9] Real2Sim2Real Transfer for Control of Cable-driven Robots via a Differentiable Physics Engine

Title: Real2Sim2Real Transfer for Control of Cable-driven Robots via a Differentiable Physics Engine

Link: https://arxiv.org/abs/2209.06261

Venue: IROS

Code: not open-sourced

Authors: Kun Wang, William R. Johnson III, Shiyang Lu, Xiaonan Huang, Joran Booth, Rebecca Kramer-Bottiglio, Mridul Aanjaneya, Kostas Bekris

Overview: This paper describes a Real2Sim2Real (R2S2R) strategy for controlling cable-driven tensegrity robots, built on a differentiable physics engine that can be trained on limited data from a real robot. The data include offline measurements of physical properties (such as the mass and geometry of robot components) and trajectories observed under a random control policy. With these data, the engine is iteratively refined and used to discover locomotion policies that transfer directly to the real robot. Beyond the pipeline itself, the paper contributes a way of computing non-zero gradients at contact points, a loss function for matching tensegrity locomotion gaits, and a trajectory-segmentation technique that avoids conflicting gradient evaluations during training. Multiple iterations of the R2S2R process are demonstrated and evaluated on a real 3-bar tensegrity robot.

Abstract: Tensegrity robots, composed of rigid rods and flexible cables, exhibit high strength-to-weight ratios and significant deformations, which enable them to navigate unstructured terrains and survive harsh impacts. They are hard to control, however, due to high dimensionality, complex dynamics, and a coupled architecture. Physics-based simulation is a promising avenue for developing locomotion policies that can be transferred to real robots. Nevertheless, modeling tensegrity robots is a complex task due to a substantial sim2real gap. To address this issue, this paper describes a Real2Sim2Real (R2S2R) strategy for tensegrity robots. This strategy is based on a differentiable physics engine that can be trained given limited data from a real robot. These data include offline measurements of physical properties, such as mass and geometry for various robot components, and the observation of a trajectory using a random control policy. With the data from the real robot, the engine can be iteratively refined and used to discover locomotion policies that are directly transferable to the real robot. Beyond the R2S2R pipeline, key contributions of this work include computing non-zero gradients at contact points, a loss function for matching tensegrity locomotion gaits, and a trajectory segmentation technique that avoids conflicts in gradient evaluation during training. Multiple iterations of the R2S2R process are demonstrated and evaluated on a real 3-bar tensegrity robot.
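
The sketch below illustrates the Real2Sim step in miniature: tune one simulator parameter so that simulated motion matches a trajectory recorded from the "real" system. Finite-difference gradients stand in for the paper's differentiable engine, and the mass-spring model is an assumed toy, not a tensegrity robot.

```python
# Miniature Real2Sim loop: fit a simulator's spring stiffness so simulated
# motion matches "real" data. Finite differences stand in for a
# differentiable physics engine; the mass-spring system is a toy model.
import numpy as np

def simulate(k, steps=200, dt=0.01):
    """Semi-implicit Euler rollout of an undamped unit-mass spring."""
    x, v, xs = 1.0, 0.0, []
    for _ in range(steps):
        v += dt * (-k * x)
        x += dt * v
        xs.append(x)
    return np.array(xs)

real = simulate(k=4.0)                   # pretend this came from the robot

def loss(k):
    return float(np.mean((simulate(k) - real) ** 2))

k, lr, eps = 1.0, 2.0, 1e-4
for _ in range(300):                     # gradient descent on the mismatch
    grad = (loss(k + eps) - loss(k - eps)) / (2 * eps)
    k -= lr * grad
print(f"recovered stiffness: {k:.2f}")   # approaches the true value 4.0
```

A true differentiable engine replaces the finite differences with exact gradients, which is what makes the approach scale past a single parameter.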

[10] ConDA: Unsupervised Domain Adaptation for LiDAR Segmentation via Regularized Domain Concatenation

Title: ConDA: Unsupervised Domain Adaptation for LiDAR Segmentation via Regularized Domain Concatenation

Link: https://arxiv.org/abs/2111.15242

Venue: ICRA

Code: not open-sourced

Authors: Lingdong Kong, Niamul Quader, Venice Erin Liong

Overview: This paper presents ConDA, a regularized-domain-concatenation approach to unsupervised domain adaptation (UDA) for LiDAR segmentation, transferring what is learned from labeled source-domain data to raw target-domain data. The method builds an intermediate domain from fine-grained interchange signals of the source and target domains and uses it for self-training. To improve training on the source domain and self-training on the intermediate domain, the authors propose an anti-aliasing regularizer and an entropy aggregator that reduce the harm from aliasing artifacts and noisy pseudo-labels. Experiments show that ConDA mitigates domain gaps more effectively than previous methods.

Abstract: Transferring knowledge learned from the labeled source domain to the raw target domain for unsupervised domain adaptation (UDA) is essential to the scalable deployment of autonomous driving systems. State-of-the-art methods in UDA often employ a key idea: utilizing joint supervision signals from both source and target domains for self-training. In this work, we improve and extend this aspect. We present ConDA, a concatenation-based domain adaptation framework for LiDAR segmentation that: 1) constructs an intermediate domain consisting of fine-grained interchange signals from both source and target domains without destabilizing the semantic coherency of objects and background around the ego-vehicle; and 2) utilizes the intermediate domain for self-training. To improve the network training on the source domain and self-training on the intermediate domain, we propose an anti-aliasing regularizer and an entropy aggregator to reduce the negative effect caused by the aliasing artifacts and noisy pseudo labels. Through extensive studies, we demonstrate that ConDA significantly outperforms prior arts in mitigating domain gaps.
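
A schematic sketch of the domain-concatenation idea: build a mixed scan whose alternating sectors come from the labeled source scan and the pseudo-labeled target scan, then self-train on the result. The sector-swap rule and array shapes are invented for illustration and are not ConDA's actual construction.

```python
# Schematic "intermediate domain": alternate angular sectors of a labeled
# source scan and a pseudo-labeled target scan. Illustrative only.
import numpy as np

def concat_domains(source_scan, source_labels, target_scan, pseudo_labels):
    """Swap alternating angular sectors between source and target scans."""
    n = len(source_scan)
    sector = np.arange(n) // (n // 4)            # 4 equal angular sectors
    mask = sector % 2 == 0                       # even sectors come from source
    mixed_scan = np.where(mask[:, None], source_scan, target_scan)
    mixed_labels = np.where(mask, source_labels, pseudo_labels)
    return mixed_scan, mixed_labels

rng = np.random.default_rng(0)
src, tgt = rng.random((360, 3)), rng.random((360, 3))  # toy angle-ordered points
scan, labels = concat_domains(src, np.zeros(360, int), tgt, np.ones(360, int))
print(labels[::90])                              # alternates 0 (source), 1 (target)
```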

[11] OpenVSLAM: A Versatile Visual SLAM Framework

Title: OpenVSLAM: A Versatile Visual SLAM Framework

Link: https://arxiv.org/abs/1910.01122

Venue:

Code: not open-sourced

Authors: Shinya Sumikura, Mikiya Shibuya, Ken Sakurada

Overview: This paper introduces OpenVSLAM, a visual SLAM framework with high usability and extensibility. Visual SLAM systems are essential for AR devices and for the autonomous control of robots and drones. Conventional open-source visual SLAM frameworks, however, are not flexible enough to be called as libraries from third-party programs. To address this, the authors developed a new visual SLAM framework, designed to be easy to use and extend, with several useful features and functions for research and development.

Abstract: In this paper, we introduce OpenVSLAM, a visual SLAM framework with high usability and extensibility. Visual SLAM systems are essential for AR devices, autonomous control of robots and drones, etc. However, conventional open-source visual SLAM frameworks are not appropriately designed as libraries called from third-party programs. To overcome this situation, we have developed a novel visual SLAM framework. This software is designed to be easily used and extended. It incorporates several useful features and functions for research and development.
