H-BRS Bibliography
The ongoing digitisation of everyday working life means that employers process ever larger amounts of their employees' personal data. This development is particularly problematic with regard to employee data protection and the right to informational self-determination. We propose company Privacy Dashboards as a means to compensate for the lack of transparency and control. For the conceptual design we use, among other methods, mental models. We present the methodology and first results of our research and highlight the opportunities that such an approach offers for the user-centred development of Privacy Dashboards.
At the sixth edition of the scientific workshop "Usable Security und Privacy" at Mensch und Computer 2020, current research and practice contributions will, as in previous years, be presented and then discussed with all participants. This year, three contributions address the topic of privacy and one the topic of security. The workshop continues and further develops an established forum in which experts from different domains, e.g. usability and security engineering, can exchange ideas across disciplinary boundaries.
Deep learning models are extensively used in safety-critical applications, so they need to be not only accurate but also highly reliable. One way of achieving this is by quantifying uncertainty. Bayesian methods for uncertainty quantification have been extensively studied for deep learning models applied to images, but have been less explored for 3D modalities such as point clouds, which are often used in robots and autonomous systems. In this work, we evaluate three uncertainty quantification methods, namely Deep Ensembles, MC-Dropout and MC-DropConnect, on the DarkNet21Seg 3D semantic segmentation model and comprehensively analyse the impact of various parameters, such as the number of ensemble members or forward passes and the drop probability, on task performance and the quality of the uncertainty estimates. We find that Deep Ensembles outperform the other methods on both performance and uncertainty metrics, by a margin of 2.4% in mIOU and 1.3% in accuracy, while providing reliable uncertainty for decision making.
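All three methods evaluated above share the same aggregation step: run several stochastic forward passes (ensemble members, or dropout/DropConnect samples), average the per-class probabilities, and use the entropy of the averaged prediction as an uncertainty estimate. A minimal sketch of that step in plain Python follows; the function name and the two-class toy inputs are illustrative assumptions, not taken from the paper or the DarkNet21Seg code.

```python
import math

def aggregate_mc_passes(prob_passes):
    """Aggregate T stochastic forward passes for one point.

    prob_passes: list of T per-class probability vectors
                 (e.g. from ensemble members or MC-Dropout samples).
    Returns (mean_probs, predictive_entropy); higher entropy
    indicates higher predictive uncertainty.
    """
    T = len(prob_passes)
    n_classes = len(prob_passes[0])
    # Average the class probabilities across the T passes.
    mean_probs = [sum(p[c] for p in prob_passes) / T for c in range(n_classes)]
    # Entropy of the averaged distribution (0 log 0 treated as 0).
    entropy = -sum(p * math.log(p) for p in mean_probs if p > 0.0)
    return mean_probs, entropy

# Agreeing passes yield low entropy; disagreeing passes yield high entropy.
_, h_agree = aggregate_mc_passes([[0.9, 0.1], [0.9, 0.1]])
_, h_disagree = aggregate_mc_passes([[0.9, 0.1], [0.1, 0.9]])
```

In this two-class toy case the disagreeing passes average to a uniform distribution, so their predictive entropy reaches the maximum of ln 2, whereas the agreeing passes stay well below it — the qualitative behaviour that makes such estimates usable for downstream decision making.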