
Media (2)


Other articles (25)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including:
    • critique of existing features and functions
    • articles contributed by developers, administrators, content producers and editors
    • screenshots to illustrate the above
    • translations of existing documentation into other languages
    To contribute, register to the project users’ mailing (...)

On other sites (5682)

  • Revision 35243: suivre [35095]

    16 February 2010, by brunobergot@… — Log

    suivre [35095]

  • What Is Data Misuse & How to Prevent It? (With Examples)

    13 May 2024, by Erin

    Your data is everywhere. Every time you sign up for an email list, log in to Facebook or download a free app onto your smartphone, your data is being taken.

    This can scare customers and users who fear their data will be misused.

    While data can be a powerful asset for your business, it’s important you manage it well, or you could be in over your head.

    In this guide, we break down what data misuse is, what the different types are, some examples of major data misuse and how you can prevent it so you can grow your brand sustainably.

    What is data misuse?

    Data is a good thing.

    It helps analysts and marketers understand their customers better so they can serve them relevant information, products and services to improve their lives.

    But it can quickly become a bad thing for both the customers and business owners when it’s mishandled and misused.


    Data misuse is when a business uses data outside of the agreed-upon terms. When companies collect data, they are legally required to communicate how that data will be used.

    Who or what determines when data is being misused?

    Several frameworks do:

    • User agreements
    • Data privacy laws
    • Corporate policies
    • Industry regulations

    There are certain laws and regulations around how you can collect and use data. Failure to comply with these guidelines and rules can result in several consequences, including legal action.

    Keep reading to discover the different types of data misuse and how to prevent it.

    3 types of data misuse

    There are a few different types of data misuse.

    If you fail to understand them, you could face penalties, legal trouble and a poor brand reputation.


    1. Commingling

    When you collect data, you need to ensure you’re using it for the right purpose. Commingling is when an organisation collects data from a specific audience for a specific reason but then uses the data for another purpose.

    One example of commingling is if a company shares sensitive customer data with another company. In many cases, sister companies will share data even if the terms of the data collection didn’t include that clause.

    Another example is if someone collects data for academic purposes like research but then uses the data later on for marketing purposes to drive business growth in a for-profit company.

    In either case, the company went wrong by not being clear on what the data would be used for. You must communicate with your audience exactly how the data will be used.

    2. Personal benefit

    The second common way data is misused in the workplace is through “personal benefit.” This is when someone with access to data abuses it for their own gain.

    The most common example of personal benefit data misuse is when an employee misuses internal data.

    While this may sound like each instance of data misuse is caused by malicious intent, that’s not always the case. Data misuse can still exist even if an employee didn’t have any harmful intent behind their actions. 

    One of the most common examples is when an employee mistakenly moves data from a company device to personal devices for easier access.

    3. Ambiguity

    As mentioned above when discussing commingling, a company must only use data in the ways it said it would when the data was collected.

    A company can misuse data when they’re unclear on how the data is used. Ambiguity is when a company fails to disclose how user data is being collected and used.

    This means that communicating poorly about how the data will be used can itself amount to misuse.

    One of the most common ways this happens is when a company doesn’t know how to use the data, so they can’t give a specific reason. However, this is still considered misuse, as companies need to disclose exactly how they will use the data they collect from their customers.

    Laws on data misuse you need to follow

    Data misuse can lead to poor reputations and penalties from big tech companies. For example, if you step outside social media platforms’ guidelines, you could be suspended, banned or shadowbanned.

    But what’s even more important is that certain types of data misuse could mean you’re breaking laws worldwide. Here are some laws on data misuse you need to follow to avoid legal trouble:

    General Data Protection Regulation (GDPR)

    The GDPR, or General Data Protection Regulation, is a law within the European Union (EU) that went into effect in 2018.

    The GDPR was implemented to set a standard and improve data protection in Europe. It was also established to increase accountability and transparency for data breaches within businesses and organisations.

    The purpose of the GDPR is to protect residents within the European Union.

    The penalty for breaking GDPR laws is a fine of up to 20 million euros or 4% of global annual revenue (whichever is higher).

    The GDPR doesn’t just affect companies in Europe. You can break the GDPR’s laws regardless of where your organisation is located worldwide. As long as your company collects, processes or uses the personal data of any EU resident, you’re subject to the GDPR’s rules.

    If you want to track user data to grow your business, you need to ensure you’re following international data laws. Tools like Matomo—the world’s leading privacy-friendly web analytics solution—can help you achieve GDPR compliance and maintain it.

    With Matomo, you can confidently enhance your website’s performance, knowing that you’re adhering to data protection laws. 


    California Consumer Privacy Act (CCPA)

    The California Consumer Privacy Act (CCPA) is another important data law companies worldwide must follow.

    Like GDPR, the CCPA is a data privacy law established to protect residents of a certain region — in this case, residents of California in the United States.

    The CCPA was implemented in 2020, and businesses worldwide can be penalised for breaking the regulations. For example, if you’re found violating the CCPA, you could be fined $7,500 for each intentional violation.

    If your violations are unintentional, you can still be fined, but at a lower amount of $2,500 per violation.

    The Gramm-Leach-Bliley Act (GLBA)

    If your business is located within the United States, then you’re subject to a federal law implemented in 1999 called The Gramm-Leach-Bliley Act (GLB Act or GLBA).

    The GLBA is also known as the Financial Modernization Act of 1999. Its purpose is to control the way American financial institutions handle consumer data. 

    The GLBA has three sections:

    1. The Financial Privacy Rule: regulates the collection and disclosure of private financial data.
    2. The Safeguards Rule: requires financial institutions to establish security programs to protect financial data.
    3. The Pretexting Provisions: prohibit accessing private data under false pretences.

    The GLBA also requires financial institutions in the U.S. to give their customers written privacy policy communications that explain their data-sharing practices.

    4 examples of data misuse in real life

    If you want to see what data misuse looks like in real life, look no further.

    Big tech is central to some of the biggest data misuses and scandals.


    Here are a few real-life examples of data misuse you should take note of to avoid ending up in a similar scenario:

    1. Facebook election interference

    One of history’s most famous examples of data misuse is the Facebook and Cambridge Analytica scandal in 2018.

    In the run-up to the 2016 U.S. presidential election, Cambridge Analytica, a political consulting firm, acquired personal data from Facebook users that was said to have been collected for academic research.

    Instead, Cambridge Analytica used the data of roughly 87 million Facebook users for political profiling and ad targeting.

    This is a prime example of commingling.

    The result? Cambridge Analytica was left bankrupt and dissolved, and Facebook was fined $5 billion by the Federal Trade Commission (FTC).

    2. Uber “God View” tracking

    Another big tech company, Uber, was caught misusing data a decade ago. 

    Why?

    Uber implemented a new feature for its employees in 2014 called “God View.”

    The tool enabled Uber employees to track riders through the app. The problem was that riders were being watched without their permission: “God View” let Uber staff see riders’ movements and locations.

    The FTC ended up hitting Uber with a major lawsuit, and as part of the settlement agreement, Uber agreed to have an outside firm audit its privacy practices between 2014 and 2034.


    3. Twitter targeted ads overstep

    In 2019, Twitter admitted that it had allowed advertisers to access its users’ personal data to improve advertisement targeting.

    Advertisers were given access to user email addresses and phone numbers without explicit permission from the users. As a result, Twitter ad buyers could cross-reference this contact information with Twitter’s data to serve targeted ads.

    Twitter stated that the data leak was an internal error. 

    4. Google location tracking

    In 2019, Google was found to have failed to clearly disclose how it uses its users’ personal data, which is an example of ambiguity.

    The result?

    The French data protection authority fined Google $57 million.

    8 ways to prevent data misuse in your company

    Now that you know the dangers of data misuse and its associated penalties, it’s time to understand how you can prevent it in your company.


    Here are eight ways you can prevent data misuse:

    1. Track data with an ethical web analytics solution

    You can’t get by in today’s business world without tracking data. The question is whether you’re tracking it safely or not.

    If you want to ensure you aren’t getting into legal trouble with data misuse, then you need to use an ethical web analytics solution like Matomo.

    With it, you can track and improve your website performance while remaining GDPR-compliant and respecting user privacy. Unlike other web analytics solutions that monetise your data and auction it off to advertisers, with Matomo, you own your data.


    2. Don’t share data with big tech

    As the data misuse examples above show, big tech companies often violate data privacy laws.

    And while companies like Google appear to be convenient, that convenience often comes at a cost, especially when it comes to data leaks, privacy breaches and the sale of your data to advertisers.

    Have you ever heard the phrase “you are the product”? When it comes to big tech, chances are that if you’re getting it for free, you (and your data) are the product being sold.

    The best way to stop sharing data with big tech is to stop using platforms like Google. For more ideas on different Google product alternatives, check out this list of Google alternatives.

    3. Identity verification 

    Data misuse typically isn’t a company-wide ploy. More often, it stems from a lack of security structure and systems within your company.

    An important place to start is to ensure proper identity verification for anyone with access to your data.

    4. Access management

    After establishing identity verification, you should ensure you have proper access management set up. For example, you should only give specific access to specific roles in your company to prevent data misuse.
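
    As an illustration only, a small role-based check is one way to ensure that access matches job function; the roles, permissions and function below are hypothetical sketches, not taken from any specific product:

# Hypothetical sketch of role-based access to customer data (Python).
ROLE_PERMISSIONS = {
    "analyst": {"read_aggregated"},
    "support": {"read_customer_profile"},
    "admin": {"read_aggregated", "read_customer_profile", "export_raw"},
}

def can_access(role, permission):
    # Grant access only if the permission was explicitly given to the role.
    return permission in ROLE_PERMISSIONS.get(role, set())

# A support agent can read a profile but cannot export raw data.
assert can_access("support", "read_customer_profile")
assert not can_access("support", "export_raw")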

    5. Activity logs and monitoring

    One way to track data misuse or breaches is by setting up activity logs to ensure you can see who is accessing certain types of data and when they’re accessing it.

    You should ensure you have a team dedicated to continuously monitoring these logs to catch anything quickly.
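
    A minimal sketch of what such an activity log could record; the field names and file path are illustrative assumptions, not a standard:

# Hypothetical append-only data-access log (Python).
import json
from datetime import datetime, timezone

def log_data_access(user, dataset, action, path="data_access.log"):
    # Record who accessed which dataset, how, and when.
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "dataset": dataset,
        "action": action,  # e.g. "read", "export", "delete"
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_data_access("j.doe", "customer_emails", "export")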

    6. Behaviour alerts 

    While manually monitoring data is important, it’s also good to set up automatic alerts if there is unusual activity around your data centres. You should set up behaviour alerts and notifications in case threats or compromising events occur.
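
    One simple way to turn such logs into alerts, sketched here with an arbitrary threshold; the threshold and the example data are assumptions:

# Hypothetical anomaly check: flag users with an unusually high daily access count (Python).
from collections import Counter

def unusual_access(events, threshold=100):
    # events: iterable of (user, dataset) tuples for one day.
    counts = Counter(user for user, _ in events)
    return [user for user, count in counts.items() if count > threshold]

# Example: j.doe touched customer records 150 times today and gets flagged.
events = [("j.doe", "customer_emails")] * 150 + [("a.smith", "customer_emails")] * 5
print(unusual_access(events))  # ['j.doe']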

    7. Onboarding, training, education

    One way to ensure quality data management is to keep your employees up to speed on data security. You should ensure data security is a part of your employee onboarding. Also, you should have regular training and education to keep people informed on protecting company and customer data.

    8. Create data protocols and processes 

    To ensure long-term data security, you should establish data protocols and processes. 

    To protect your user data, set up rules and systems within your organisation that people can reference and follow continuously to prevent data misuse.

    Leverage data ethically with Matomo

    Data is everything in business.

    But it’s not something to be taken lightly. Mishandling user data can break customer trust, lead to penalties from organisations and even create legal trouble and massive fines.

    You should only use privacy-first tools to ensure you’re handling data responsibly.

    Matomo is a privacy-friendly web analytics tool that collects, stores and tracks data across your website without breaking privacy laws.

    With over 1 million websites using Matomo, you can track and improve website performance with :

    • Accurate data (no data sampling)
    • Privacy-friendly and compliant with privacy regulations like GDPR, CCPA and more
    • Advanced features like heatmaps, session recordings, A/B testing and more

    Try Matomo free for 21 days. No credit card required.

  • How to set up crontab to run multiple Python scripts and a shell script?

    5 January 2021, by Alexander Mitsou

    I need to start three Python 3 scripts and a shell script using crontab. These scripts should run at the same time, without any delay. Each script runs for exactly one minute. For instance, I have scheduled crontab to run these scripts every 5 minutes.

    


    My problem is that if I run each script individually from the terminal, it executes with no errors, but when run from crontab nothing happens.

    


    DISCLAIMER: If I set up the Python 3 scripts individually in crontab, they work fine!

    


    Here's my crontab setup:

    


    */5 * * * * cd /home/user/Desktop/ && /usr/bin/python3 script1.py >> report1.log

*/5 * * * * cd /home/user/Desktop/ && /usr/bin/python3 script2.py >> report2.log

*/5 * * * * cd /home/user/Desktop/ && /usr/bin/python3 script3.py >> report3.log

*/5 * * * * cd /home/user/Desktop/ && /usr/bin/sh script4.sh >> report4.log 
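
    One possible reason nothing appears in the report files is that the >> redirections above capture only stdout; Python tracebacks and shell errors go to stderr, which cron mails or discards instead of writing to the log. A minimal variant of the first entry that also captures stderr:

*/5 * * * * cd /home/user/Desktop/ && /usr/bin/python3 script1.py >> report1.log 2>&1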


    


    In addition, I need to mention that the shell script contains this FFmpeg command:

    


    #!/bin/bash

parent_dir=`dirname \`pwd\`` 
folder_name="/Data/Webcam" 
new_path=$parent_dir$folder_name  


if [ -d "$new_path" ]; then
    echo "video_audio folder exists..."
else
    echo "Creating video_audio folder in the current directory..."
    mkdir -p -- "$new_path"
    sudo chmod 777 "$new_path"
    echo "Folder created"
    echo
fi

now=$(date +%F) 
now="$( echo -e "$now" | tr  '-' '_'  )"
sub_dir=$new_path'/'$now 

if [ -d "$sub_dir" ]; then
    echo "Date Sub-directory exists..."
    echo
else
    echo "Error: ${sub_dir} not found..."
    echo "Creating date sub-directory..."
    mkdir -p -- "$sub_dir"
    sudo chmod 777 "$sub_dir"
    echo "Date sub-directory created..."
    echo
fi

fname=$(date +%H_%M_%S)".avi"
video_dir=$sub_dir'/'$fname
ffmpeg -f pulse -ac 1 -i default -f v4l2 -i  /dev/video0 -vcodec libx264 -t 00:01:00 $video_dir 


    


    The log file of that script contains the following:

    


    video_audio folder exists...
Date Sub-directory exists...

Package ffmpeg is already installed...
Package v4l-utils is already installed...

Package: ffmpeg
Status: install ok installed
Priority: optional
Section: video
Installed-Size: 2010
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Architecture: amd64
Multi-Arch: foreign
Version: 7:4.2.4-1ubuntu0.1
Replaces: libav-tools (<< 6:12~~), qt-faststart (<< 7:2.7.1-3~)
Depends: libavcodec58 (= 7:4.2.4-1ubuntu0.1), libavdevice58 (= 7:4.2.4-1ubuntu0.1), libavfilter7 (= 7:4.2.4-1ubuntu0.1), libavformat58 (= 7:4.2.4-1ubuntu0.1), libavresample4 (= 7:4.2.4-1ubuntu0.1), libavutil56 (= 7:4.2.4-1ubuntu0.1), libc6 (>= 2.29), libpostproc55 (= 7:4.2.4-1ubuntu0.1), libsdl2-2.0-0 (>= 2.0.10), libswresample3 (= 7:4.2.4-1ubuntu0.1), libswscale5 (= 7:4.2.4-1ubuntu0.1)
Suggests: ffmpeg-doc
Breaks: libav-tools (<< 6:12~~), qt-faststart (<< 7:2.7.1-3~), winff (<< 1.5.5-5~)
Description: Tools for transcoding, streaming and playing of multimedia files
 FFmpeg is the leading multimedia framework, able to decode, encode, transcode,
 mux, demux, stream, filter and play pretty much anything that humans and
 machines have created. It supports the most obscure ancient formats up to the
 cutting edge.
 .
 This package contains:
  * ffmpeg: a command line tool to convert multimedia files between formats
  * ffplay: a simple media player based on SDL and the FFmpeg libraries
  * ffprobe: a simple multimedia stream analyzer
  * qt-faststart: a utility to rearrange Quicktime files
Homepage: https://ffmpeg.org/
Original-Maintainer: Debian Multimedia Maintainers <debian-multimedia@lists.debian.org>
Package: v4l-utils
Status: install ok installed
Priority: optional
Section: utils
Installed-Size: 2104
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Architecture: amd64
Version: 1.18.0-2build1
Replaces: ivtv-utils (<< 1.4.1-2), media-ctl
Depends: libv4l-0 (= 1.18.0-2build1), libv4l2rds0 (= 1.18.0-2build1), libc6 (>= 2.17), libgcc-s1 (>= 3.0), libstdc++6 (>= 5.2), libudev1 (>= 183)
Breaks: ivtv-utils (<< 1.4.1-2), media-ctl
Description: Collection of command line video4linux utilities
 v4l-utils contains the following video4linux command line utilities:
 .
  decode_tm6000: decodes tm6000 proprietary format streams
  rds-ctl: tool to receive and decode Radio Data System (RDS) streams
  v4l2-compliance: tool to test v4l2 API compliance of drivers
  v4l2-ctl, cx18-ctl, ivtv-ctl: tools to control v4l2 controls from the cmdline
  v4l2-dbg: tool to directly get and set registers of v4l2 devices
  v4l2-sysfs-path: sysfs helper tool
Original-Maintainer: Gregor Jasny <gjasny@googlemail.com>
Homepage: https://linuxtv.org/downloads/v4l-utils/


    


    Since the Python files all have the same structure, I'm uploading a sample file here:

    


    # -*- coding: utf-8 -*-
from threading import Timer
from pynput.mouse import Listener
import logging
import sys
import os
sys.path.insert(0, os.path.join(os.path.dirname(
    os.path.realpath(__file__)), "../"))

from Functions import utils as ut

if __name__=='__main__':

    ut.initialize_dirs()
    rec_file = ''.join(('mouse_',ut.get_date(),'.txt'))
    raw_data = ut.get_name('Mouse')
    rec_file = os.path.join(raw_data,rec_file)
    logging.basicConfig(filename=rec_file,level=logging.DEBUG,format="%(asctime)s    %(message)s")

    try:
        with Listener(on_move=ut.on_move, on_click=ut.on_click,on_scroll=ut.on_scroll) as listener:
            Timer(60, listener.stop).start()
            listener.join()
    except KeyboardInterrupt as err:
        print(err)
        sys.exit(0)

    print('Exiting logger...')



    


    I'm also uploading the functions that I use:

    


# -*- coding: utf-8 -*-
from serial import Serial
from datetime import datetime, timedelta
import pandas as pd
import collections
import logging
import shutil
import serial
import time
import sys
import os

click_held = False
button = None


def on_move(x,y):
    """The callback to call when mouse move events occur

    Args:
        x (float): The new pointer position
        y (float): The new pointer poisition
    """
    if click_held:
        logging.info("MV    {0:>8}  {1:>8}  {2:>8}:".format(x,y,str(None)))
    else:
        logging.info("MV    {0:>8}  {1:>8}  {2:>8}:".format(x,y,str(None)))


def on_click(x,y,button,pressed):
    """The callback to call when a mouse button is clicked

    Args:
        x (float): Mouse coordinates on screen
        y (float): Mouse coordinates on screen
        button (str): one of the Button values
        pressed (bool): Pressed is whether the button was pressed
    """
    global click_held
    if pressed:
        click_held = True
        logging.info("CLK    {0:>7}    {1:>6}    {2:>13}".format(x,y,button))
    else:
        click_held = False
        logging.info("RLS    {0:>7}    {1:>6}    {2:>13}".format(x,y,button))


def on_scroll(x,y,dx,dy):
    """The callback to call when mouse scroll events occur

    Args:
        x (float): The new pointer position on screen
        y (float): The new pointer position on screen
        dx (int): The horizontal scroll. The units of scrolling is undefined
        dy (int): The vertical scroll. The units of scrolling is undefined
    """
    if dy == -1:
        logging.info("SCRD    {0:>6}    {1:>6}    {2:>6}".format(x,y,str(None)))
    elif dy == 1:
        logging.info("SCRU    {0:>6}    {1:>6}    {2:>6}".format(x,y,str(None)))
    else:
        pass


def on_press_keys(key):
    """The callback to call when a button is pressed.

    Args:
        key (str): A KeyCode,a Key or None if the key is unknown
    """
    subkeys = [
    'Key.alt','Key.alt_gr','Key.alt_r','Key.backspace',
    'Key.space','Key.ctrl','Key.ctrl_r','Key.down',
    'Key.up','Key.left','Key.right','Key.page_down',
    'Key.page_up','Key.enter','Key.shift','Key.shift_r'
    ]

    key = str(key).strip('\'')
    if(key in subkeys):
        #print(key)
        logging.info(key)
    else:
        pass


def record_chair(output_file):
    """Read the data stream coming from the serial monitor
       in order to get the sensor readings

    Args:
        output_file (str): The file name, where the data stream will be stored
    """
    serial_port = "/dev/ttyACM0"
    baud_rate = 9600
    ser = serial.Serial(serial_port,baud_rate)
    logging.basicConfig(filename=output_file,level=logging.DEBUG,format="%(asctime)s    %(message)s")
    flag = False
    start = time.time()
    while time.time() - start < 60.0:
        try:
            serial_data = str(ser.readline().decode().strip('\r\n'))
            time.sleep(0.2)
            tmp = serial_data.split('  ')[0] #Getting Sensor Id
            if(tmp == 'A0'):
                flag = True
            if (flag and tmp != 'A4'):
                #print(serial_data)
                logging.info(serial_data)
            if(flag and tmp == 'A4'):
                flag = False
                #print(serial_data)
                logging.info(serial_data)
        except (UnicodeDecodeError, KeyboardInterrupt) as err:
            print(err)
            print(err.args)
            sys.exit(0)


def initialize_dirs():
    """Create the appropriate directories in order to save
       and process the collected data
    """
    current_path = os.path.abspath(os.getcwd())
    os.chdir('..')
    current_path = (os.path.abspath(os.curdir)) #/Multodal_User_Monitoring
    current_path = os.path.join(current_path,'Data')
    create_subdirs([current_path])

    #Create mouse log folder
    mouse = os.path.join(current_path,'Mouse')
    create_subdirs([mouse])
    #Create mouse subfolders
    names = concat_names(mouse)
    create_subdirs(names)

    #Create keyboard log  folder
    keyboard = os.path.join(current_path,'Keyboard')
    create_subdirs([keyboard])
    #Create keyboard subfolders
    names = concat_names(keyboard)
    create_subdirs(names)

    #Create the chair log folder
    chair = os.path.join(current_path,'Chair')
    create_subdirs([chair])
    #Create chair subfolders
    names = concat_names(chair)
    create_subdirs(names)

    #Create webcam log folder
    webcam = os.path.join(current_path,'Webcam')
    create_subdirs([webcam])

def concat_names(dir) -> str:
    """Concatenate the given folder names
       with the appropriate path

    Args:
        dir (str): The directory to create the subfolders

    Returns:
        list: The new absolute paths
    """
    raw_data = os.path.join(dir,'Raw')
    edited_data = os.path.join(dir,'Edited_logs')
    csv_data = os.path.join(dir,'CSV')
    features = os.path.join(dir,'Features')
    dirs = [raw_data,edited_data,csv_data,features]
    return dirs


def create_subdirs(paths):
    """Create sub directories given some absolute paths

    Args:
        paths (list): A list containing the paths to be created
    """
    for index,path in enumerate(paths):
        if(os.path.isdir(paths[index])):
            pass
        else:
            os.mkdir(paths[index])


def round_down(num,divisor) -> int:
    """Round the number of lines contained into the recording file,
       down to the nearest multiple of the given divisor

    Args:
        num (int): The number of lines contained into the given log file
        divisor (int): The divisor in order to get tuples of divisor

    Returns:
        int: The nearest multiple of five
    """
    return num-(num%divisor)


def get_date() -> str:
    """Get the current date in order to properly name
       the recored log files
    Returns:
        str: The current date in: YY_MM_DD format
    """
    return datetime.now().strftime('%Y_%m_%d')


def get_name(modality) -> str:
    """Save the recorded log into /Data//Raw

    Args:
        modality (str): The log data source

    Returns:
        str: The absolute path where each recording is saved
    """
    current_path = os.path.abspath(os.getcwd())
    current_path = os.path.join(current_path,'Data')

    if modality == 'Chair':
        chair_path = os.path.join(current_path,modality,'Raw')
        return chair_path

    elif modality == 'Mouse':
        mouse_path = os.path.join(current_path,modality,'Raw')
        return mouse_path

    elif modality == 'Keyboard':
        keyboard_path = os.path.join(current_path,modality,'Raw')
        return keyboard_path


def crawl_dir(target,folder) -> str:
    """Enumerate all the given files in a directory
       based on the given file extension

    Args:
        target (str): The file to search for
        folder (str): The folder to search

    Returns:
        [type]: A list containing the file names
    """
    current_path = os.path.abspath(os.getcwd())
    path = os.path.join(current_path,folder)
    file_names =[]
    for f in os.listdir(path):
        if(f.endswith(target)):
            fname=os.path.join(path,f)
            file_names.append(fname)
    return file_names


def convert_keys2_csv(input_file,output_file):
    """Convert the data stream file(keylogger recording) from .txt to .csv format

    Args:
        input_file (str): The data stream file in .txt format
        output_file (str): The csv extension file name
    """
    df = pd.read_fwf(input_file)
    col_names = ['Date','Time','Key']
    df.to_csv(output_file,header=col_names,encoding='utf-8',index=False)


def convert_mouse2_csv(input_file,output_file):
    """Convert the data stream file(mouselogger recording) from .txt to .csv format

    Args:
        input_file (str): The data stream file in .txt format
        output_file (str): The csv extension file name
    """
    df = pd.read_fwf(input_file)
    col_names = ['Date','Time','Action','PosX','PosY','Button']
    df.to_csv(output_file,header=col_names,encoding='utf-8',index=False)


def convert_chair_2_csv(input_file,output_file):
    """Convert the data stream file(chair recording)
       from .txt to .csv format

    Args:
        input_file (str): The data stream file in .txt format
        output_file (str): The csv extension file name
    """
    if(os.path.isfile(input_file)):
        pass
    else:
        print('Invalid working directory...')
        print('Aborting...')
        sys.exit(0)

    tmp0,tmp1,tmp2,tmp3,tmp4 = 0,1,2,3,4

    line_number = 0
    for line in open(input_file).readlines():
        line_number += 1

    rounded_line = round_down(line_number,5)
    d = collections.defaultdict(list)

    with open(input_file,'r') as f1:
        lines = f1.readlines()
        for i in range(rounded_line // 5):
            #Sensor:Analog input 0 values
            Sid0 = lines[i+tmp0]
            temp = Sid0.split()
            d['Sid0'].append([temp[0],temp[1],temp[2],temp[3]])
            #Sensor:Analog input 1 values
            Sid1 = lines[i+tmp1]
            temp = Sid1.split()
            d['Sid1'].append([temp[0],temp[1],temp[2],temp[3]])
            #Sensor:Analog input 2 values
            Sid2 = lines[i+tmp2]
            temp = Sid2.split()
            d['Sid2'].append([temp[0],temp[1],temp[2],temp[3]])
            #Sensor:Analog input 3 values
            Sid3 = lines[i+tmp3]
            temp = Sid3.split()
            d['Sid3'].append([temp[0],temp[1],temp[2],temp[3]])
            #Sensor:Analog input 4 values
            Sid4 = lines[i+tmp4]
            temp = Sid4.split()
            d['Sid4'].append([temp[0],temp[1],temp[2],temp[3]])

            tmp0 += 4
            tmp1 += 4
            tmp2 += 4
            tmp3 += 4
            tmp4 += 4

    l = []
    for i in range(rounded_line // 5):
        date = d['Sid0'][i][0]
        time = d['Sid0'][i][1]
        A0_val = d['Sid0'][i][3]
        A1_val = d['Sid1'][i][3]
        A2_val = d['Sid2'][i][3]
        A3_val = d['Sid3'][i][3]
        A4_val = d['Sid4'][i][3]
        l.append([date,time,A0_val,A1_val,A2_val,A3_val,A4_val])

    sensor_readings_df = pd.DataFrame.from_records(l)
    sensor_readings_df.columns = ['Date','Time','A0','A1','A2','A3','A4']
    sensor_readings_df.to_csv(output_file, encoding='utf-8', index=False)
    del l


def parse_raw_data(modality):
    """Convert each modality's raw data into csv format and move
       the edited raw data into the appropriate Edited_logs folder

    Args:
        modality (str): The data source
    """
    #Change directories
    current_path = os.path.abspath(os.getcwd()) #/Functions
    os.chdir('..')
    current_path = (os.path.abspath(os.curdir)) #/Multimodal_User_Monitoring
    os.chdir('./Data')#/Multimodal_User_Monitoring/Data
    current_path = (os.path.abspath(os.curdir)) #/Multimodal_User_Monitoring/Data
    current_path = os.path.join(current_path,modality) #example: /Multimodal_User_Monitoring/Data/<modality>
    raw_data_path = os.path.join(current_path,'Raw')
    csv_data_path = os.path.join(current_path,'CSV')
    edited_logs_path = os.path.join(current_path,'Edited_logs')

    txt_names = crawl_dir('.txt',raw_data_path)
    csv_names = []
    for elem in txt_names:
        name = elem.split('/')[-1].split('.')[0]
        csv_name = name+'.csv'
        tmp = os.path.join(csv_data_path,csv_name)
        csv_names.append(tmp)

    if modality == 'Mouse':
        if len(txt_names) == len(csv_names):
            for i, elem in enumerate(txt_names):
            #for i in range(len(txt_names)):
                convert_mouse2_csv(txt_names[i],csv_names[i])
                shutil.move(txt_names[i],edited_logs_path)

    elif modality == 'Keyboard':
        if len(txt_names) == len(csv_names):
            for i, elem in enumerate(txt_names):
            #for i in range(len(txt_names)):
                convert_keys2_csv(txt_names[i],csv_names[i])
                shutil.move(txt_names[i],edited_logs_path)

    elif modality == 'Chair':
        if len(txt_names) == len(csv_names):
            for i, elem in enumerate(txt_names):
            #for i in range(len(txt_names)):
                convert_chair_2_csv(txt_names[i],csv_names[i])
                shutil.move(txt_names[i],edited_logs_path)


    I need to mention that the logs of the Python 3 scripts are empty.
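
    A plausible factor to investigate (an assumption, not a confirmed diagnosis): cron runs jobs in a minimal environment outside the graphical session, so pynput's Listener (which needs an X display) and the ffmpeg capture from PulseAudio and /dev/video0 may fail under cron even though they work from a terminal. A sketch of a crontab that exports the usual session variables; the values :0, /home/user/.Xauthority and /run/user/1000 are assumptions that depend on the actual login session:

# Hypothetical environment for display/audio access under cron; adjust to the real session.
DISPLAY=:0
XAUTHORITY=/home/user/.Xauthority
XDG_RUNTIME_DIR=/run/user/1000
*/5 * * * * cd /home/user/Desktop/ && /usr/bin/python3 script1.py >> report1.log 2>&1

    Combined with the 2>&1 redirection, this makes it visible in the logs whether the scripts fail on the display, the audio device or something else.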
