
Media (91)

Other articles (99)

  • Customising categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration of form templates.
    For a document of type media, the fields not displayed by default are: Short description
    This configuration area is also where you can specify the (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • Writing a news item

    21 June 2013, by

    Present the changes on your MediaSPIP, or news about your projects, using the news section.
    In spipeo, MediaSPIP's default theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the news item creation form.
    News item creation form. For a document of type news item, the fields offered by default are: Publication date (customise the publication date) (...)

On other sites (10074)

  • Data Privacy in Business : A Risk Leading to Major Opportunities

    9 August 2022, by Erin — Privacy

    Data privacy in business is a contentious issue. 

    Claims that “big data is the new oil of the digital economy” and strong links between “data-driven personalisation and customer experience” encourage leaders to set up massive data collection programmes.

    However, many of these conversations downplay the magnitude of security, compliance and ethical risks companies face when betting too much on customer data collection. 

    In this post, we discuss the double-edged nature of privacy issues in business: the risk-ridden and the opportunity-driven.

    3 Major Risks of Ignoring Data Privacy in Business

    As the old adage goes: just because everyone else is doing it doesn’t make it right.

    Easy data accessibility and the ubiquity of analytics tools make consumer data collection and processing sound like a “given”. But the decision to do so opens your business to a spectrum of risks.

    1. Compliance and Legal Risks 

    Data collection and customer privacy are protected by a host of international laws including GDPR, CCPA, and regional regulations. Only 15% of countries (mostly developing ones) don’t have dedicated laws for protecting consumer privacy. 

    State of global data protection legislation, via the UN

    Global legislation includes provisions on:

    • Collectible data types
    • Allowed uses of obtained data 
    • Consent to data collection and online tracking 
    • Rights to request data removal 

    Personally identifiable information (PII) processing is prohibited or strictly regulated in most jurisdictions. Yet businesses repeatedly circumvent existing rules, and on occasion break them outright.

    In Australia, for example, only 2% of brands use logos, icons or messages to transparently call out online tracking, data sharing or other specific uses of data at the sign-up stage. In Europe, around half of small businesses are still not fully GDPR-compliant — and Big Tech companies like Google, Amazon and Facebook can’t get a grip on their data collection practices even when pressed with horrendous fines. 

    Although the media mostly reports on compliance fines for “big names”, smaller businesses are coming under increasing scrutiny.

    As Max Schrems, an Austrian privacy activist and founder of the NGO noyb, explained in a Matomo webinar:

    “In Austria, my home country, there are a lot of €5,000 fines going out there as well [to smaller businesses]. Most of the time, they are just not reported. They just happen below the surface. [GDPR fines] are already a reality.”

    In April 2022, the EU Court of Justice ruled that consumer groups can autonomously sue businesses for breaches of data protection — and nonprofit organisations like noyb enable more people to do so. 

    Finally, new data privacy legislation is underway across the globe. In the US, Colorado, Connecticut, Virginia and Utah have data protection acts at different stages of approval. South African authorities are working on the Protection of Personal Information Act (POPI) and Brazil is working on a local General Data Protection Law (LGPD).

    Re-thinking your stance on user privacy and data protection now can significantly reduce the compliance burden in the future. 

    2. Security Risks 

    Data collection also mandates data protection for businesses. Yet, many organisations focus on the former and forget about the latter. 

    Lenient attitudes to consumer data protection resulted in a major spike in data breaches.

    Check Point research found that cyberattacks increased 50% year-over-year, with each organisation facing 925 cyberattacks per week globally.

    Many of these attacks succeed because of poor data security practices. As a result, billions of stolen consumer records become publicly available or get sold on dark web marketplaces.

    What’s even more troublesome is that stolen consumer records are often purchased by marketing firms or companies specialising in spam campaigns. Buyers can also use stolen emails to distribute malware, stage phishing and other social engineering attacks – and harvest even more data for sale.

    One business’s negligence creates a snowball effect of negative consequences down the line, with customers bearing the brunt of it all.

    In 2020, hackers successfully targeted a Finnish psychotherapy practice. They managed to steal hundreds of patient records — and then demanded a ransom from both the firm and its patients in exchange for not exposing information about their mental health issues. Many patients refused to pay, and some 300 records ended up being posted online, as the Associated Press reported.

    Not only did the practice have to deal with the cyber-breach aftermath, but it also faced vocal regulatory and patient criticisms for failing to properly protect such sensitive information.

    Security negligence can carry both direct losses (heavy data breach fines) and indirect losses in the form of reputational damage. An overwhelming 90% of consumers say they wouldn’t buy from a business that doesn’t adequately protect their data. This brings us to the last point.

    3. Reputational Risks 

    Trust is the new currency. Data negligence and consumer privacy violations are the two fastest ways to lose it. 

    Globally, consumers are concerned about how businesses collect, use, and protect their data. 

    Consumer data sharing attitudes
    • According to Forrester, 47% of UK adults actively limit the amount of data they share with websites and apps. 49% of Italians express willingness to ask companies to delete their personal data. 36% of Germans use privacy and security tools to minimise online tracking of their activities. 
    • A GDMA survey also notes that, globally, 82% of consumers want more control over the personal information they share with companies. 77% also expect brands to be transparent about how their data is collected and used.

    When businesses fail to hold up their end of the bargain — collecting just the right amount of data and using it with integrity — consumers are quick to cut ties.

    Once the information about privacy violations becomes public, companies lose:

    • Brand equity 
    • Market share 
    • Competitive positioning 

    An AON report estimates that companies can lose as much as 25% of their initial value after a data breach. In some cases, the losses can be even higher.

    In 2015, the British telecom TalkTalk suffered a major data breach. Over 150,000 customer records were stolen by hackers. To contain the issue, TalkTalk had to spend between $60 and $70 million on containment efforts. Still, they lost over 100,000 customers in a matter of months and one-third of their company value, equivalent to $1.4 billion, by the end of the year.

    More recent data from Infosys gives the following maximum cost estimates for the brand damage companies could experience after a data breach (accidental or malicious).

    Estimated cost of brand damage due to a data breach

    3 Major Advantages of Privacy in Business 

    Despite all the industry mishaps, a reassuring 77% of CEOs now recognise that their companies must fundamentally change their approaches to customer engagement, in particular when it comes to ensuring data privacy. 

    Many organisations take proactive steps to cultivate a privacy-centred culture and implement transparent data collection policies. 

    Here’s why gaining the “privacy advantage” pays off.

    1. Market Competitiveness 

    There’s a reason why privacy-focused companies are booming. 

    Consumers’ mounting concerns and frustrations over the lack of online privacy prompt many to look for alternative privacy-centred products and services.

    Privacy-centred B2C and B2B products are moving from the industry margins to the mainstream.

    Across the board, consumers express greater trust towards companies that protect their privacy.

    And as we well know, trust translates to higher engagement, loyalty and, ultimately, revenue.

    By embedding privacy into the core of your product, you give users more reasons to choose, stay with and support your business.

    2. Higher Operational Efficiency

    Customer data protection isn’t just a policy – it’s a culture of collecting “just enough” data, protecting it and using it responsibly. 

    Sadly, that’s the area where most organisations trail behind. At present, some 90% of businesses admit to having amassed massive data silos. 

    Siloed data is expensive to maintain and operationalise. Moreover, when left unattended, it can evolve into a pressing compliance issue. 

    A recently leaked document from Facebook says the company has no idea where all of its first-party, third-party and sensitive-category data goes or how it is processed. Because of this, Facebook struggles to achieve GDPR compliance and remains under regulatory pressure.

    Similarly, Google Analytics is riddled with privacy issues. Other Google products have been found to be collecting and operationalising consumer data without users’ knowledge or consent. Again, this creates valid grounds for regulatory investigations.

    Smaller companies have a better chance of making things right at the onset. 

    By curbing customer data collection, you can:

    • Reduce data hosting and cloud computation costs (aka trim your cloud bill)
    • Improve data security practices (since you have fewer assets to protect)
    • Make your staff more productive by consolidating essential data and making it easy and safe to access

    Privacy-mindful companies also have an easier time when it comes to compliance and can meet new data regulations faster. 

    3. Better Marketing Campaigns 

    The biggest counter-argument to reducing customer data collection is marketing. 

    “How can we effectively sell our products if we know nothing about our customers?” your team might ask.

    This might sound counterintuitive, but minimising data collection and usage can lead to better marketing outcomes. 

    Limiting the types of data that can be used encourages your people to become more creative and productive by focusing on fewer, more important metrics.

    Think of it this way: every other business uses the same targeting parameters on Facebook or Google for its paid ad campaigns. As a result, we see ads everywhere — and people grow unresponsive to them, or limit their exposure with ad-blocking software, private browsers and VPNs. Your ad budgets get wasted on chasing mirage metrics instead of actual prospects.

    Case in point: in 2017, Marc Pritchard of Procter & Gamble cut the company’s digital advertising budget by 6% (or $200 million). Unilever made an even bolder move and reduced its ad budget by 30% in 2018.

    Guess what happened?

    P&G saw a 7.5% increase in organic sales and Unilever a 3.8% gain, as HBR reports. So how did both companies become more successful while spending less on advertising?

    They found that overexposure to online ads led to diminishing returns and annoyances among loyal customers. By minimising ad exposure and adopting alternative marketing strategies, the two companies managed to market better to new and existing customers. 

    The takeaway: there are more ways to engage consumers than pestering them with repetitive retargeting messages or creepy personalisation.

    You can collect first-party data with consent to incrementally improve your product — and educate customers on the benefits of your solution in transparent terms.

    Final Thoughts 

    The definitive advantage of privacy is consumers’ trust. 

    You can’t buy it, you can’t fake it, you can only cultivate it by aligning your external appearances with internal practices. 

    Because when you fail to address privacy internally, your mishaps will quickly become apparent either as social media call-outs or worse — as a security incident, a data breach or a legal investigation. 

    By choosing to treat consumer data with respect, you build an extra layer of protection around your business, plus draw in some banging benefits too. 

    Get one step closer to becoming a privacy-centred company by choosing Matomo as your web analytics solution. We offer robust privacy controls for ensuring ethical, compliant, privacy-friendly and secure website tracking. 

  • FFmpeg overlay positioning issue : Converting frontend center coordinates to FFmpeg top-left coordinates

    25 January, by tarun

    I'm building a web-based video editor where users can:

    


    • Add multiple videos
    • Add images
    • Add text overlays with background color

    


    The frontend sends coordinates where each element's (x, y) represents its center position.
    On clicking the export button, I want all the data to be exported as one final video.
    On click, I send the data to the backend like this:

    


     const exportAllVideos = async () => {
    try {
      const formData = new FormData();
        
      
      const normalizedVideos = videos.map(video => ({
          ...video,
          startTime: parseFloat(video.startTime),
          endTime: parseFloat(video.endTime),
          duration: parseFloat(video.duration)
      })).sort((a, b) => a.startTime - b.startTime);

      
      for (const video of normalizedVideos) {
          const response = await fetch(video.src);
          const blobData = await response.blob();
          const file = new File([blobData], `${video.id}.mp4`, { type: "video/mp4" });
          formData.append("videos", file);
      }

      
      const normalizedImages = images.map(image => ({
          ...image,
          startTime: parseFloat(image.startTime),
          endTime: parseFloat(image.endTime),
          x: parseInt(image.x),
          y: parseInt(image.y),
          width: parseInt(image.width),
          height: parseInt(image.height),
          opacity: parseInt(image.opacity)
      }));

      
      for (const image of normalizedImages) {
          const response = await fetch(image.src);
          const blobData = await response.blob();
          const file = new File([blobData], `${image.id}.png`, { type: "image/png" });
          formData.append("images", file);
      }

      
      const normalizedTexts = texts.map(text => ({
          ...text,
          startTime: parseFloat(text.startTime),
          endTime: parseFloat(text.endTime),
          x: parseInt(text.x),
          y: parseInt(text.y),
          fontSize: parseInt(text.fontSize),
          opacity: parseInt(text.opacity)
      }));

      
      formData.append("metadata", JSON.stringify({
          videos: normalizedVideos,
          images: normalizedImages,
          texts: normalizedTexts
      }));

      const response = await fetch("my_flask_endpoint", {
          method: "POST",
          body: formData
      });

      if (!response.ok) {
          // surface backend errors instead of trying to read a failed response
          throw new Error(`Export failed with status ${response.status}`);
      }

      const finalVideo = await response.blob();
      const url = URL.createObjectURL(finalVideo);
      const a = document.createElement("a");
      a.href = url;
      a.download = "final_video.mp4";
      a.click();
      URL.revokeObjectURL(url);

    } catch (e) {
      console.log(e, "err");
    }
  };


    


    For each object type (text, image and video), the frontend stores the data as an array of objects. Below is the data structure for each object:

    


    // the frontend data for each
  const newVideo = {
      id: uuidv4(),
      src: URL.createObjectURL(videoData.videoBlob),
      originalDuration: videoData.duration,
      duration: videoData.duration,
      startTime: 0,
      playbackOffset: 0,
      endTime: videoData.endTime || videoData.duration,
      isPlaying: false,
      isDragging: false,
      speed: 1,
      volume: 100,
      x: window.innerHeight / 2,
      y: window.innerHeight / 2,
      width: videoData.width,
      height: videoData.height,
    };
    const newTextObject = {
      id: uuidv4(),
      description: text,
      opacity: 100,
      x: containerWidth.width / 2,
      y: containerWidth.height / 2,
      fontSize: 18,
      duration: 20,
      endTime: 20,
      startTime: 0,
      color: "#ffffff",
      backgroundColor: hasBG,
      padding: 8,
      fontWeight: "normal",
      width: 200,
      height: 40,
    };

    const newImage = {
      id: uuidv4(),
      src: URL.createObjectURL(imageData),
      x: containerWidth.width / 2,
      y: containerWidth.height / 2,
      width: 200,
      height: 200,
      borderRadius: 0,
      startTime: 0,
      endTime: 20,
      duration: 20,
      opacity: 100,
    };
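Since each object's (x, y) above is a center position in the frontend container's coordinate space, the backend has to both rescale to the output resolution and convert to FFmpeg's top-left convention. A minimal sketch of that conversion (the container and output sizes here are illustrative, not the app's real values):

```python
def center_to_top_left(cx, cy, w, h, container_w, container_h,
                       out_w=1920, out_h=1080):
    """Map a center-based (cx, cy) in container space to a top-left
    (x, y) in output space, scaling the element size along the way."""
    sx = out_w / container_w   # horizontal scale factor
    sy = out_h / container_h   # vertical scale factor
    out_cx, out_cy = cx * sx, cy * sy      # center point in output space
    out_w_el, out_h_el = w * sx, h * sy    # element size in output space
    # FFmpeg's overlay/drawtext filters expect the top-left corner
    return int(out_cx - out_w_el / 2), int(out_cy - out_h_el / 2)

# An element centered in a 960x540 container stays centered in a
# 1920x1080 output:
print(center_to_top_left(480, 270, 200, 100, 960, 540))  # → (760, 440)
```

As written, the backend below subtracts half the width/height but never rescales from container space to output space, so the two coordinate systems only line up when the container happens to be exactly 1920x1080.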



    


    BACKEND CODE -

    


    import os
import shutil
import subprocess
from flask import Flask, request, send_file
import ffmpeg
import json
from werkzeug.utils import secure_filename
import uuid
from flask_cors import CORS


app = Flask(__name__)
CORS(app, resources={r"/*": {"origins": "*"}})



UPLOAD_FOLDER = 'temp_uploads'
if not os.path.exists(UPLOAD_FOLDER):
    os.makedirs(UPLOAD_FOLDER)


@app.route('/')
def home():
    return 'Hello World'


OUTPUT_WIDTH = 1920
OUTPUT_HEIGHT = 1080



@app.route('/process', methods=['POST'])
def process_video():
    work_dir = None
    try:
        work_dir = os.path.abspath(os.path.join(UPLOAD_FOLDER, str(uuid.uuid4())))
        os.makedirs(work_dir)
        print(f"Created working directory: {work_dir}")

        metadata = json.loads(request.form['metadata'])
        print("Received metadata:", json.dumps(metadata, indent=2))
        
        video_paths = []
        videos = request.files.getlist('videos')
        for idx, video in enumerate(videos):
            filename = f"video_{idx}.mp4"
            filepath = os.path.join(work_dir, filename)
            video.save(filepath)
            if os.path.exists(filepath) and os.path.getsize(filepath) > 0:
                video_paths.append(filepath)
                print(f"Saved video to: {filepath} Size: {os.path.getsize(filepath)}")
            else:
                raise Exception(f"Failed to save video {idx}")

        image_paths = []
        images = request.files.getlist('images')
        for idx, image in enumerate(images):
            filename = f"image_{idx}.png"
            filepath = os.path.join(work_dir, filename)
            image.save(filepath)
            if os.path.exists(filepath):
                image_paths.append(filepath)
                print(f"Saved image to: {filepath}")

        output_path = os.path.join(work_dir, 'output.mp4')

        filter_parts = []

        base_duration = metadata["videos"][0]["duration"] if metadata["videos"] else 10
        filter_parts.append(f'color=c=black:s={OUTPUT_WIDTH}x{OUTPUT_HEIGHT}:d={base_duration}[canvas];')

        for idx, (path, meta) in enumerate(zip(video_paths, metadata['videos'])):
            x_pos = int(meta.get("x", 0) - (meta.get("width", 0) / 2))
            y_pos = int(meta.get("y", 0) - (meta.get("height", 0) / 2))
            
            filter_parts.extend([
                f'[{idx}:v]setpts=PTS-STARTPTS,scale={meta.get("width", -1)}:{meta.get("height", -1)}[v{idx}];',
                f'[{idx}:a]asetpts=PTS-STARTPTS[a{idx}];'
            ])

            if idx == 0:
                filter_parts.append(
                    f'[canvas][v{idx}]overlay=x={x_pos}:y={y_pos}:eval=init[temp{idx}];'
                )
            else:
                filter_parts.append(
                    f'[temp{idx-1}][v{idx}]overlay=x={x_pos}:y={y_pos}:'
                    f'enable=\'between(t,{meta["startTime"]},{meta["endTime"]})\':eval=init'
                    f'[temp{idx}];'
                )

        last_video_temp = f'temp{len(video_paths)-1}'

        if video_paths:
            audio_mix_parts = []
            for idx in range(len(video_paths)):
                audio_mix_parts.append(f'[a{idx}]')
            filter_parts.append(f'{"".join(audio_mix_parts)}amix=inputs={len(video_paths)}[aout];')

        
        if image_paths:
            for idx, (img_path, img_meta) in enumerate(zip(image_paths, metadata['images'])):
                input_idx = len(video_paths) + idx
                
                
                x_pos = int(img_meta["x"] - (img_meta["width"] / 2))
                y_pos = int(img_meta["y"] - (img_meta["height"] / 2))
                
                filter_parts.extend([
                    f'[{input_idx}:v]scale={img_meta["width"]}:{img_meta["height"]}[img{idx}];',
                    f'[{last_video_temp}][img{idx}]overlay=x={x_pos}:y={y_pos}:'
                    f'enable=\'between(t,{img_meta["startTime"]},{img_meta["endTime"]})\':'
                    f'alpha={img_meta["opacity"]/100}[imgout{idx}];'
                ])
                last_video_temp = f'imgout{idx}'

        if metadata.get('texts'):
            for idx, text in enumerate(metadata['texts']):
                next_output = f'text{idx}' if idx < len(metadata['texts']) - 1 else 'vout'
                
                escaped_text = text["description"].replace("'", "\\'")
                
                x_pos = int(text["x"] - (text["width"] / 2))
                y_pos = int(text["y"] - (text["height"] / 2))
                
                text_filter = (
                    f'[{last_video_temp}]drawtext=text=\'{escaped_text}\':'
                    f'x={x_pos}:y={y_pos}:'
                    f'fontsize={text["fontSize"]}:'
                    f'fontcolor={text["color"]}'
                )
                
                if text.get('backgroundColor'):
                    text_filter += f':box=1:boxcolor={text["backgroundColor"]}:boxborderw=5'
                
                if text.get('fontWeight') == 'bold':
                    text_filter += ':font=Arial-Bold'
                
                text_filter += (
                    f':enable=\'between(t,{text["startTime"]},{text["endTime"]})\''
                    f'[{next_output}];'
                )
                
                filter_parts.append(text_filter)
                last_video_temp = next_output
        else:
            filter_parts.append(f'[{last_video_temp}]null[vout];')

        
        filter_complex = ''.join(filter_parts)

        
        cmd = [
            'ffmpeg',
            *sum([['-i', path] for path in video_paths], []),
            *sum([['-i', path] for path in image_paths], []),
            '-filter_complex', filter_complex,
            '-map', '[vout]'
        ]
        
        
        if video_paths:
            cmd.extend(['-map', '[aout]'])
        
        cmd.extend(['-y', output_path])

        print(f"Running ffmpeg command: {' '.join(cmd)}")
        result = subprocess.run(cmd, capture_output=True, text=True)
        
        if result.returncode != 0:
            print(f"FFmpeg error output: {result.stderr}")
            raise Exception(f"FFmpeg processing failed: {result.stderr}")

        return send_file(
            output_path,
            mimetype='video/mp4',
            as_attachment=True,
            download_name='final_video.mp4'
        )

    except Exception as e:
        print(f"Error in video processing: {str(e)}")
        return {'error': str(e)}, 500
    
    finally:
        if work_dir and os.path.exists(work_dir):
            try:
                print(f"Directory contents before cleanup: {os.listdir(work_dir)}")
                if not os.environ.get('FLASK_DEBUG'):
                    shutil.rmtree(work_dir)
                else:
                    print(f"Keeping directory for debugging: {work_dir}")
            except Exception as e:
                print(f"Cleanup error: {str(e)}")

                
if __name__ == '__main__':
    app.run(debug=True, port=8000)
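To make the filter graph the code above assembles easier to inspect, here is a standalone sketch that builds the same kind of chain for one video and one image (the metadata values are made up, and the setpts/audio parts are omitted for brevity):

```python
OUT_W, OUT_H = 1920, 1080

def build_filter(video_meta, image_meta):
    """Assemble a filter_complex string: black canvas, one scaled
    video overlay, then one time-limited image overlay on top."""
    parts = [f'color=c=black:s={OUT_W}x{OUT_H}:d={video_meta["duration"]}[canvas];']
    # convert the center-based coordinates to top-left corners
    vx = int(video_meta["x"] - video_meta["width"] / 2)
    vy = int(video_meta["y"] - video_meta["height"] / 2)
    parts.append(f'[0:v]scale={video_meta["width"]}:{video_meta["height"]}[v0];')
    parts.append(f'[canvas][v0]overlay=x={vx}:y={vy}[tmp0];')
    ix = int(image_meta["x"] - image_meta["width"] / 2)
    iy = int(image_meta["y"] - image_meta["height"] / 2)
    parts.append(f'[1:v]scale={image_meta["width"]}:{image_meta["height"]}[img0];')
    parts.append(
        f'[tmp0][img0]overlay=x={ix}:y={iy}:'
        f"enable='between(t,{image_meta['startTime']},{image_meta['endTime']})'[vout];"
    )
    return ''.join(parts)

video = {"x": 960, "y": 540, "width": 1280, "height": 720, "duration": 10}
image = {"x": 200, "y": 200, "width": 100, "height": 100,
         "startTime": 0, "endTime": 5}
print(build_filter(video, image))
```

Printing the assembled string this way makes it easy to paste the graph into an `ffmpeg -filter_complex` invocation by hand and check each overlay's x/y against what the frontend shows.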



    


    I'm also attaching what the final result looks like in the frontend web view vs. in the downloaded video. As you can see, the downloaded video has all the coordinates and positions messed up, for the texts, the images and the videos.
    downloaded video's view
    frontend web view

    


    Can somebody please help me figure this out? :)

    


  • Progress with rtc.io

    12 August 2014, by silvia

    At the end of July, I gave a presentation about WebRTC and rtc.io at the WDCNZ Web Dev Conference in beautiful Wellington, NZ.


    Putting that talk together reminded me about how far we have come in the last year both with the progress of WebRTC, its standards and browser implementations, as well as with our own small team at NICTA and our rtc.io WebRTC toolbox.

    WDCNZ presentation, page 5

    One of the most exciting opportunities is still under-exploited : the data channel. When I talked about the above slide and pointed out Bananabread, PeerCDN, Copay, PubNub and also later WebTorrent, that’s where I really started to get Web Developers excited about WebRTC. They can totally see the shift in paradigm to peer-to-peer applications away from the Server-based architecture of the current Web.

    Many were also excited to learn more about rtc.io, our own npm-modules-based approach to a JavaScript API for WebRTC.

    rtcio_modules

    We believe that the world of JavaScript has reached a critical stage where we can no longer code by copy-and-pasting JavaScript snippets from all over the Web. We need a more structured approach to JavaScript module reuse. Node, with JavaScript on the back end, really only accelerated this development. However, we've needed it for a long time on the front end, too. One big library (jQuery, anyone?) that does everything anyone could ever need on the front end isn't going to work any longer with the amount of functionality we now expect Web applications to support. Just look at the insane growth of npm compared to other module collections:

    Packages per day across popular platforms (shamelessly copied from http://blog.nodejitsu.com/npm-innovation-through-modularity/)

    For those who – like myself – found it difficult to understand how to tap into the sheer power of npm modules as a front-end developer, simply use browserify. npm modules are prepared following the CommonJS module definition spec. Browserify works natively with that and “compiles” all the dependencies of an npm module into a single bundle.js file that you can use on the front end through a script tag, as you would in plain HTML. You can learn more about browserify, module definitions and how to use browserify.

    For those of you not quite ready to dive in with browserify, we have prepared the rtc module, which exposes the most commonly used packages of rtc.io through an “RTC” object from a browserified JavaScript file. You can also directly download the JavaScript file from GitHub.

    Using rtc.io rtc JS library

    So, I hope you enjoy rtc.io, and I hope you enjoy my slides and the large collection of interesting links inside the deck, and of course: enjoy WebRTC! Thanks to Damon, Jeff, Cathy, Pete and Nathan – you're an awesome team!

    On a side note, I was really excited to meet the author of browserify, James Halliday (@substack), at WDCNZ, whose talk on “building your own tools” seemed to take me back to the times when everything was done on the command line. I think James is using Node and the Web in a way that would appeal to a Linux kernel developer. Fascinating!

    The post Progress with rtc.io first appeared on ginger’s thoughts.