Advanced search

Media (91)

Other articles (67)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable release of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    For a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • Adding notes and captions to images

    7 February 2011

    To add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights for creating, modifying and deleting notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

On other sites (7332)

  • GDPR Compliance and Personal Data: The Ultimate Guide

    22 September 2023, by Erin — GDPR

    According to the International Data Corporation (IDC), the world generated 109 zettabytes of data in 2022 alone, and that number is on track to nearly triple to 291 zettabytes in 2027. For scale, a zettabyte is a trillion gigabytes, or a one followed by 21 zeros in bytes.

    A major portion of that data is generated online, and the conditions for securing that digital data can have major real-world consequences. For example, online identifiers that fall into the wrong hands can be used nefariously for cybercrime, identity theft or unwanted targeting. Users also want control over how their actions are tracked online and transparency into how their information is used.

    Therefore, regional and international regulations are necessary to set the terms for respecting users’ privacy and control over personal information. Perhaps the most widely known of these laws is the European Union’s General Data Protection Regulation (GDPR).

    What is personal data under GDPR?

    Under the General Data Protection Regulation (GDPR), “personal data” refers to information linked to an identifiable natural person. An “identifiable natural person” is someone directly or indirectly recognisable via individually specific descriptors such as physical, genetic, economic, cultural, employment and social details.

    It’s important to note that under GDPR, the definition of personal data is very broad, and it encompasses both information that is commonly considered personal (e.g., names and addresses) and more technical or specialised data (e.g., IP addresses or device IDs) that can be used to identify individuals indirectly.

    Organisations that handle personal data must adhere to strict rules and principles regarding the processing and protection of this data to ensure individuals’ privacy rights are respected and upheld.

    Personal data can include, but is not limited to, the following:

    1. Basic Identity Information: This includes a person’s name, government-issued ID number, postal address, phone number, email address or other similar identifiers.
    2. Biographical Information: Details such as date of birth, place of birth, nationality and gender.
    3. Contact Information: Information that allows communication with the individual, such as phone numbers, email addresses or mailing addresses.
    4. Financial Information: Data related to a person’s finances, including credit card numbers, bank account numbers, income records or financial transactions.
    5. Health and Medical Information: Information about a person’s health, medical history or healthcare treatments.
    6. Location Data: Data that can pinpoint a person’s geographical location, such as GPS coordinates or information derived from mobile devices.
    7. Online Identifiers: Information like IP addresses, cookies or other online tracking mechanisms that can be used to identify or track individuals online.
    8. Biometric Data: Unique physical or behavioural characteristics used for identification, such as fingerprints, facial recognition data or voiceprints.

    Sensitive Data

    Sensitive data is a special category of personal data that may not be processed unless specific conditions are met, including the user giving explicit consent. The data must also be necessary to fulfil one or more of a limited set of allowed purposes, such as reasons related to employment, social protections or legal claims.

    Sensitive information includes details about a person’s racial or ethnic origin, sexual orientation, political opinions, religion, trade union membership, biometric data or genetic data.

    What are the 7 main principles of GDPR?

    The 7 principles of GDPR guide companies in how to properly handle personal data gathered from their users.

    A list of the main principles to follow for GDPR personal data handling

    The seven principles of GDPR are:

    1. Lawfulness, fairness and transparency

    Lawfulness means having legal grounds for data processing, such as consent, legitimate interests, contract and legal obligation. If you can achieve your objective without processing personal data, the basis is no longer lawful.

    Fairness means you’re processing data reasonably and in line with users’ best interests; they shouldn’t be shocked to find out what you’re using it for.

    Transparency means being open regarding when you’re processing user data, what you’re using it for and who you’re collecting it from.

    To get started with this, use our guide on creating a GDPR-compliant privacy policy.

    2. Purpose limitation

    You should only process user data for the original purposes you communicated to users when requesting their explicit consent. If you aim to undertake a new purpose, it must be compatible with the original stated purpose. Otherwise, you’ll need to ask for consent again.

    3. Data minimisation

    You should only collect as much data as you need to accomplish compliant objectives and nothing more, especially not other personally identifiable information (PII).

    Matomo provides several features for extensive data minimisation, including the ability to anonymise IP addresses.

    Data minimisation is well-liked by users. Around 70% of people have taken active steps towards protecting their identity online, so they’ll likely appreciate any principles that help them in this effort.

    4. Accuracy

    The user data you process should be accurate and kept up to date where necessary. You should have reasonable systems in place to catch inaccurate data and correct or delete it. If you must store known-inaccurate records, label them clearly as such so they are not processed as accurate data.

    5. Storage limitation

    This principle requires you to eliminate data you’re no longer using for the original purposes. You must implement time limits, after which you’ll delete or anonymise any user data on record. Matomo allows you to configure your system so that logs are automatically deleted after a set period.

    6. Integrity and confidentiality

    This requires that data processors have security measures in place to protect data from threats such as hackers, loss and damage. As an open-source web analytics solution, Matomo enables you to verify its security first-hand.

    7. Accountability

    Accountability means you’re responsible for what you do with the data you collect. It’s your duty to maintain compliance and document everything for audits. Matomo tracks a lot of the data you’d need for this, including activity, task and application logs.

    Who does GDPR apply to?

    The GDPR applies to any company that processes the personal data of EU citizens and residents (regardless of the location of the company). 

    If this is the first time you’ve heard about this, don’t worry! Matomo provides tools that allow you to determine exactly what kinds of data you’re collecting and how they must be handled for full compliance.

    Best practices for processing personal data under GDPR

    Companies subject to the GDPR need to be aware of several key principles and best practices to ensure they process personal data in a lawful and responsible manner.

    Here are some essential practices to implement:

    1. Lawful basis for processing: Organisations must have a lawful basis for processing personal data. Common lawful bases include the necessity of processing for compliance with a legal obligation, the performance of a contract, the protection of vital interests and tasks carried out in the public interest. Your organisation’s legitimate interests for processing must not override the individual’s legal rights.
    2. Data minimisation: Collect and process only the personal data that is necessary for the specific purpose for which it was collected. Matomo’s anonymisation capabilities help you avoid collecting excessive or irrelevant data.
    3. Transparency: Provide clear and concise information to individuals about how their data will be processed. Privacy statements should be clear and accessible to users to allow them to easily understand how their data is used.
    4. Consent: If you are relying on consent as a lawful basis, make sure you design your privacy statements and consent forms to be usable. This lets you ensure that consent is freely given, specific, informed and unambiguous. Also, individuals must be able to withdraw their consent at any time.
    5. Data subject rights: You must have mechanisms in place to uphold the data subject’s individual rights, such as the rights to access, erase, rectify errors and restrict processing. Establish internal processes for handling such requests.
    6. Data protection impact assessments (DPIAs): Conduct DPIAs for high-risk processing activities, especially when introducing new technologies or processing sensitive data.
    7. Security measures: You must implement appropriate technical security measures to maintain the safety of personal data. This can include security tools such as encryption, firewalls and limited access controls, as well as organisational practices like regular security assessments.
    8. Data breach response: Develop and maintain a data breach response plan. Notify relevant authorities and affected individuals of data breaches within the required timeframe.
    9. International data transfers: If transferring personal data outside the EU, ensure that appropriate safeguards are in place and consider GDPR provisions. These provisions allow data transfers from the EU to non-EU countries in three main ways:
      1. When the destination country has been deemed by the European Commission to have adequate data protection, making it similar to transferring data within the EU.
      2. Through the use of safeguards like binding corporate rules, approved contractual clauses or adherence to codes of conduct.
      3. In specific situations when none of the above apply, such as when an individual explicitly consents to the transfer after being informed of the associated risks.
    10. Data protection officers (DPOs): Appoint a data protection officer if required by GDPR. DPOs are responsible for overseeing data protection compliance within the organisation.
    11. Privacy by design and default: Integrate data protection into the design of systems and processes. Default settings should prioritise user privacy, as is the case with something like Matomo’s first-party cookies.
    12. Documentation: Maintain records of data processing activities, including data protection policies, procedures and agreements. Matomo logs and backs up web server access, activity and more, providing a solid audit trail.
    13. Employee training: Employees who handle personal data must be properly trained to uphold data protection principles and GDPR compliance best practices.
    14. Third-party contracts: If sharing data with third parties, have data processing agreements in place that outline the responsibilities and obligations of each party regarding data protection.
    15. Regular audits and assessments: Conduct periodic audits and assessments of data processing activities to ensure ongoing compliance. As mentioned previously, Matomo tracks and saves several key statistics and metrics that you’d need for a successful audit.
    16. Accountability: Demonstrate accountability by documenting and regularly reviewing compliance efforts. Be prepared to provide evidence of compliance to data protection authorities.
    17. Data protection impact on data analytics and marketing: Understand how GDPR impacts data analytics and marketing activities, including obtaining valid consent for marketing communications.

    Organisations should be on the lookout for GDPR updates, as the regulations may evolve over time. When in doubt, consult legal and privacy professionals to ensure compliance, as non-compliance could potentially result in significant fines, damage to reputation and legal consequences.

    What constitutes a GDPR breach?

    Security incidents that compromise the confidentiality, integrity and/or availability of personal data are considered a breach under GDPR. This means a breach is not limited to leaks; if you accidentally lose or delete personal data, its availability is compromised, which is technically considered a breach.

    What are the penalty fines for GDPR non-compliance?

    The penalty fines for GDPR non-compliance are up to €20 million or up to 4% of the company’s worldwide revenue from the previous fiscal year, whichever is higher. Because the €20 million figure applies regardless of company size, small companies can also face substantial fines, no matter how low-profile the breach is.

    In 2022, for instance, a company found to have mishandled user data was fined €2,000, and the webmaster responsible was personally fined €150.

    Is Matomo GDPR compliant?

    Matomo is fully GDPR compliant and can ensure you achieve compliance, too. Here’s how:

    • Data anonymization and IP anonymization
    • GDPR Manager that helps you identify gaps in your compliance and address them effectively
    • Users can opt-out of all tracking
    • First-party cookies by default
    • Users can view the data collected
    • Capabilities to delete visitor data when requested
    • You own your data and it is not used for any other purposes (like advertising)
    • Visitor logs and profiles can be disabled
    • Data is stored in the EU (Matomo Cloud) or in any country of your choice (Matomo On-Premise)

    Is there a GDPR in the US?

    There is no GDPR-equivalent law that covers the US as a whole. That said, US-based companies processing data from persons in the EU still need to adhere to GDPR principles.

    While there isn’t a federal data protection law, several states have enacted their own. One notable example is the California Consumer Privacy Act (CCPA), which Matomo is fully compliant with.

    Ready for GDPR-compliant analytics?

    The GDPR lays out a set of regulations and penalties that govern the collection and processing of personal data from EU citizens and residents. A breach under GDPR attracts a fine of either up to €20 million or 4% of the company’s revenue, and the penalty applies to companies of all sizes.

    Matomo is fully GDPR compliant and provides several features and advanced privacy settings to ensure you are as well, without sacrificing the resources you need for effective analytics. If you’re ready to get started, sign up for a 21-day free trial of Matomo — no credit card required.

    Disclaimer
    We are not lawyers and don’t claim to be. The information provided here is to help give an introduction to GDPR. We encourage every business and website to take data privacy seriously and discuss these issues with your lawyer if you have any concerns.

  • Adventures In NAS

    1 January, by Multimedia Mike — General

    In my post last year about my out-of-control single-board computer (SBC) collection, which included my meager network attached storage (NAS) solution, I noted that:

    I find that a lot of my fellow nerds massively overengineer their homelab NAS setups. I’ll explore this in a future post. For my part, people tend to find my homelab NAS solution slightly underengineered.

    So here I am, exploring this in that promised future post. I’ve been in the home NAS game a long time, but have never had very elaborate solutions for it. For my part, I tend to take an obsessively reductionist view of what constitutes a NAS: any small computer with a pool of storage and a network connection, running the Linux operating system and the Samba file sharing service.


    Simple hard drive and ethernet cable

    Many home users prefer to buy turnkey boxes, usually ones that let you install the hard drives yourself and then configure the box and its services through a friendly UI. My fellow weird computer nerds often buy cast-off enterprise hardware and set up more resilient, over-engineered solutions, as long as they have strategies to mitigate the noise and dissipate the heat, and don’t mind the electricity bills.

    If it works, awesome ! As an old hand at this, I am rather stuck in my ways, however, preferring to do my own stunts, both with the hardware and software solutions.

    My History With Home NAS Setups
    In 1998, I bought myself a new computer — a beige box tower PC, as was the style at the time. This was when normal people had one computer at most. It ran Windows, but I was curious about this new thing called “Linux” and learned to dual boot into that. Later that year, it dawned on me that nothing prevented me from buying a second ugly beige box PC and running Linux exclusively on it. Further, it could be a headless Linux box, connected by ethernet, and I could consolidate files into a single place using this file sharing software named Samba.

    I remember it being fairly onerous to get Samba working back then, and the internet was not yet much help. I recall that the thing that blocked me for a while was needing to know that I had to add an entry for the Samba server machine to the LMHOSTS (Lanman hosts) file on the Windows 95 machine.

    However, after I cracked that code, I have pretty much always had some kind of ad-hoc home NAS setup, often combined with a headless Linux development box.

    In the early 2000s, I built a new beige box PC for a file server, with a new hard disk, and a coworker tutored me on setting up a (P)ATA UDMA 133 (or was it 150? anyway, it was (P)ATA’s last hurrah before SATA conquered all) expansion card, and I remember profiling that the attached hard drive could read at a full 21 MBytes/s. It was pretty slick. Except I hadn’t really thought things through. You see, I had a hand-me-down ethernet hub cast off from my job at the time which I wanted to use. It was a 100 Mbps repeater hub, not a switch, so the catch was that all connected machines had to be capable of 100 Mbps. So, after getting all of my machines (3 at the time) upgraded to support 10/100 ethernet (the old off-brand PowerPC running Linux was the biggest challenge), I profiled transfers and realized that the best this repeater hub could achieve was about 3.6 MBytes/s. For a long time after that, I just assumed that was the upper limit of what a 100 Mbps network could achieve. Obviously, I now know that the upper limit ought to be around 11.2 MBytes/s (100 Mbps is 12.5 MBytes/s raw, minus Ethernet, IP and TCP framing overhead), and if I had gamed out that fact in advance, I would have realized it didn’t make sense to care about super-fast (for the time) disk performance.

    At this time, I was doing a lot of development work on MPlayer/xine/FFmpeg. I stored all of my multimedia material on this NAS. I remember being confused when I was working with Y4M data, which is raw frames, which is a lot of data. xine, which employed a pre-buffering strategy, would play fine for a few seconds and then stutter. Eventually, I reasoned out that the files I was working with had a data rate about twice what my awful repeater hub supported, which is probably the first time I came to really understand and respect streaming speeds and their implications for multimedia playback.

    Smaller Solutions
    For a period, I didn’t have a NAS. Then I got an Apple AirPort Extreme, which I noticed had a USB port. So I bought a dual drive brick to plug into it and used that for a time. Later (2009), I had this thing called the MSI Wind Nettop which is the only PC I’ve ever seen that can use a CompactFlash (CF) card for a boot drive. So I did just that, and installed a large drive so it could function as a NAS, as well as a headless dev box. I’m still amazed at what a low-power I/O beast this thing is, at least when compared to all the ARM SoCs I have tried in the intervening 1.5 decades. I’ve had spinning hard drives in this thing that could read at 160 MBytes/s (‘dd’ method) and have no trouble saturating the gigabit link at 112 MBytes/s, all with its early Intel Atom CPU.
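
    Incidentally, the “dd method” mentioned above is nothing more than a timed sequential read through the raw block device. A minimal sketch, assuming the drive shows up as /dev/sda (device name hypothetical):

        # Read ~4 GB sequentially from the raw device and report throughput;
        # output goes to /dev/null, so nothing is written anywhere.
        sudo dd if=/dev/sda of=/dev/null bs=1M count=4096 status=progress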

    Around 2015, I wanted a more capable headless dev box and discovered Intel’s line of NUCs. I got one of the fat models that can hold a conventional 2.5″ spinning drive in addition to the M.2 SATA SSD and I was off and running. That served me fine for a few years, until I got into the ARM SBC scene. One major limitation here is that 2.5″ drives aren’t available in nearly the capacities that make a NAS solution attractive.

    Current Solution
    My current NAS solution, chronicled in my last SBC post, is the ODroid-HC2, a highly compact ARM SoC device with an integrated USB3-SATA bridge so that a SATA drive can be connected directly to it:


    ODROID-HC2 NAS


    I tend to be weirdly proficient at recalling dates, so I’m surprised that I can’t recall when I ordered this and put it into service. But I’m pretty sure it was circa 2018. It’s only equipped with an 8 TB drive now, but I seem to recall that it started out with only a 4 TB drive. I think I upgraded to the 8 TB drive early in the pandemic in 2020, when ISPs were implementing temporary data cap amnesty and I was doing what an r/DataHoarder does.

    The HC2 has served me well, even though it has a number of shortcomings as a hardware platform chartered for NAS duty:

    1. While it has a gigabit ethernet port, it’s documented that it never really exceeds about 70 MBytes/s, due to the SoC’s limitations
    2. The specific ARM chip (Samsung Exynos 5422; more than a decade old as of this writing) lacks cryptography instructions, slowing down encryption if that’s your thing (e.g., LUKS)
    3. While the SoC supports USB3, that block is tied up for the SATA interface; the remaining USB port is only capable of USB2 speeds
    4. 32-bit ARM, which prevented me from running certain bits of software I wanted to try (like Minio)
    5. Only 1 drive, so no possibility for RAID (again, if that’s your thing)

    I also love to brag about the HC2’s power usage: I once profiled the unit for a month using a Kill-A-Watt and, under normal usage (with the drive spinning only when in active use), the unit consumed 4.5 kWh… in an entire month. Spread over roughly 720 hours, that averages out to about 6 watts.

    New Solution
    Enter the ODroid-HC4 (I purchased mine from Ameridroid, but Hardkernel works with numerous distributors):


    ODroid-HC4 with an SSD and a conventional drive


    I ordered this earlier in the year and, after many months of procrastinating and obsessing over the best approach to take with its general usage, I finally have it in service as my new NAS. Comparing point by point with the HC2:

    1. The gigabit ethernet runs at full speed (though a few things on my network run at 2.5 GbE now, so I guess I’ll always be behind)
    2. The ARM chip (Amlogic S905X3) has AES cryptography acceleration and handles all the LUKS stuff without breaking a sweat; “cryptsetup benchmark” reports between 500-600 MBytes/s on all the AES variants (see the sketch just after this list)
    3. The USB port is still only USB2, so no improvement there
    4. 64-bit ARM, which means I can run Minio to simulate block storage in a local dev environment for some larger projects I would like to undertake
    5. Supports 2 drives, if RAID is your thing
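
    For reference, cryptsetup’s built-in benchmark exercises the kernel’s crypto code entirely in memory, so it is a quick way to confirm that AES acceleration is actually in play before committing to an encrypted setup. A minimal sketch:

        # Benchmark the kernel crypto implementations (no disk I/O involved)
        cryptsetup benchmark

        # Or target the cipher spec LUKS commonly defaults to
        cryptsetup benchmark -c aes-xts-plain64 -s 512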

    How I Set It Up
    How to set up the drive configuration? As should be apparent from the photo above, I elected for an SSD (500 GB) for speed, paired with a conventional spinning HDD (18 TB) for sheer capacity. I’m not particularly trusting of RAID. I’ve watched it fail too many times, on systems that I don’t even manage, not to mention the aforementioned RAID brick that I had attached to the Apple AirPort Extreme.

    I had long been planning to use bcache, the block caching interface for Linux, which can use the SSD as a speedy cache in front of the more capacious disk. There is also LVM cache, which is supposed to achieve something similar. And then I had to evaluate the trade-offs in whether I wanted write-back, write-through, or write-around configurations.
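
    For the curious, the bcache route would have looked roughly like the following. This is a minimal sketch under stated assumptions, not my actual setup: it assumes a blank SSD at /dev/sda and a blank HDD at /dev/sdb (both device names hypothetical) on a Debian-flavored distro.

        # bcache-tools provides the make-bcache utility
        sudo apt install bcache-tools

        # Format the HDD as the backing device and the SSD as its cache,
        # attaching them in one step; this creates /dev/bcache0
        sudo make-bcache -B /dev/sdb -C /dev/sda

        # Choose the caching policy: writethrough (default, safest),
        # writeback (fastest for writes) or writearound (cache reads only)
        echo writeback | sudo tee /sys/block/bcache0/bcache/cache_mode

        # Then a filesystem goes on the cached device as usual
        sudo mkfs.ext4 /dev/bcache0

    LVM cache reaches a similar end by carving the SSD into a cache pool inside a volume group, at the cost of a bit more ceremony.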

    This was all predicated on the assumption that the spinning drive would not be able to saturate the gigabit connection. When I got around to setting up the hardware and trying some basic tests, I found that the conventional HDD had no trouble keeping up with the gigabit data rate, both reading and writing, somewhat obviating the need for SSD acceleration using any elaborate caching mechanisms.

    Maybe that’s because I sprung for the WD Red Pro series this time, rather than the Red Plus? I’m guessing that conventional drives do deteriorate over the years. I’ll find out.

    For the operating system, I stuck with my newest favorite Linux distro: DietPi. While HardKernel (parent of ODroid) makes images for the HC units, I had also used DietPi on the HC2 for the past few years, as it tends to stay more up to date.

    Then I rsync’d my data from HC2 -> HC4. It was only about 6.5 TB of total data, but it took days, as the old WD Red Plus drive is only capable of reading at around 10 MBytes/s these days. Painful.
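
    The transfer itself is nothing exotic; a hedged sketch of the kind of rsync invocation involved, with hostname and paths as hypothetical stand-ins:

        # -a preserves permissions/times/symlinks; -H keeps hard links;
        # --partial lets an interrupted multi-day copy resume where it left off
        rsync -aH --partial --info=progress2 /mnt/nas/ root@hc4:/mnt/nas/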

    For file sharing, I’m pretty sure most normal folks have nice web UIs in their NAS boxes which allow them to easily configure and monitor the shares. I know there are such applications I could set up. But I’ve been doing this so long, I just do a bare-bones setup through the terminal. I installed regular Samba and then brought over my smb.conf file from the HC2. One by one, I tested that each of the old shares was activated on the new NAS and deactivated on the old NAS. I also set up a new share for the SSD. I guess that will just serve as a fast I/O scratch space on the NAS.
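
    A bare-bones smb.conf really does not need much. A minimal sketch of the shape of such a file, with the share names, paths and user being hypothetical stand-ins for mine:

        [global]
           server string = home NAS
           map to guest = never

        [nas-main]
           path = /mnt/hdd/nas-main
           valid users = mike
           read only = no

        [nas-fast]
           path = /mnt/ssd/scratch
           valid users = mike
           read only = no

    Each user still needs a Samba password set with “smbpasswd -a”, and smbd needs a restart after any edits.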

    The conventional drive spins up and down. That’s annoying when I’m actively working on something but manage not to hit the drive for 5 minutes or so, and then an application blocks while the drive wakes up. I suppose I could set it up so that it is always running. However, I micro-manage this with a custom bash script I wrote a long time ago, which logs into the NAS and runs the “date” command every 2 minutes, appending the output to a file. As a bonus, it also prints data rate up/down stats every 5 seconds. The spinning file (“nas-main/zz-keep-spinning/keep-spinning.txt”) has never been cleared and has nearly a quarter million lines. I suppose that implies it has kept the drive spinning for half a million minutes, which works out to around 347 total days. I should compare that against the drive’s SMART stats, if I can remember how. The earliest timestamp in the file is from March 2018, so I know the HC2 NAS has been in service at least that long.
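
    The core of that script is trivial. A minimal sketch reconstructed from the description above, with the hostname and mount point hypothetical and the data-rate display left out:

        #!/bin/bash
        # Poke the NAS every 2 minutes so the drive never idles long enough
        # to spin down; each poke appends a timestamp to a file on the disk.
        while true; do
            ssh nas 'date >> /mnt/hdd/nas-main/zz-keep-spinning/keep-spinning.txt'
            sleep 120
        done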

    For tasks, vintage cron still does everything I could need. In this case, that means reaching out to websites (like this one) and automatically backing up static files.
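
    Concretely, that is just an ordinary crontab entry or two. A hedged example of the backup flavor, with the schedule, URL and destination path all hypothetical:

        # Mirror a site's static files to the NAS every night at 03:30
        30 3 * * * wget --quiet --mirror --no-parent -P /mnt/hdd/backups https://example.com/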

    I also have to have a special script for starting up. Fortunately, I was able to bring this over from the HC2 and tweak it. The data disks (though not the boot disk) are encrypted. Those need to be unlocked, and only then is it safe for the Samba and Minio services to start up. So one script does all that heavy lifting in the rare case of a reboot (this is the type of system that’s well worth having on a reliable UPS).
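
    In outline, such a startup script amounts to the following sketch; the device names, mapper names and the minio unit name are assumptions rather than the actual configuration:

        #!/bin/bash
        # After a reboot: unlock the encrypted data disks, mount them, and
        # only then start the services that depend on them.
        cryptsetup open /dev/sda1 ssd_crypt   # prompts for a passphrase
        cryptsetup open /dev/sdb1 hdd_crypt
        mount /dev/mapper/ssd_crypt /mnt/ssd
        mount /dev/mapper/hdd_crypt /mnt/hdd
        systemctl start smbd nmbd
        systemctl start minio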

    Further Work
    I need to figure out how to use the OLED display on the NAS, and how to make it show something more useful than the current time and date, which is what it does in its default configuration with HardKernel’s own Linux distro. With DietPi, it does nothing by default. I’m thinking it should be able to show the percent usage of each of the 2 drives, at a minimum.
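
    Gathering those numbers is the easy half; a minimal sketch (mount points hypothetical), with the actual OLED output depending on whichever display driver ends up in use:

        # One line per data filesystem: mount point and percent used
        df --output=target,pcent /mnt/ssd /mnt/hdd | tail -n +2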

    I also need to establish a more responsible backup regimen. I’m way too lazy about this. Fortunately, I reason that I can keep the original HC2 in service, repurposed to accept backups from the main NAS. Again, I’m sort of micro-managing this since a huge amount of data isn’t worth backing up (remember the whole DataHoarder bit), but the most important stuff will be shipped off.


  • Revision 37455: A better check... We did not yet have the id_orig available here

    20 April 2010, by kent1@… — Log

    A better check... We did not yet have the id_orig available here