
Other articles (105)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and in MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and in MP3 (supported by Flash).
    Where possible, text is analysed to retrieve the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
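    The conversion step described above can be sketched with ffmpeg. This is an illustrative sketch, not MediaSPIP's actual pipeline: the `encode_commands` helper and the codec choices (libvpx, libtheora, libx264/aac) are assumptions, and running it requires an ffmpeg build with those codecs on the PATH.

```python
import subprocess

def encode_commands(src, basename):
    """Build ffmpeg command lines for the HTML5/Flash formats the excerpt
    mentions (WebM, Ogv, MP4). Codec flags are common defaults, not
    MediaSPIP's real configuration."""
    return [
        ["ffmpeg", "-i", src, "-c:v", "libvpx", "-c:a", "libvorbis", basename + ".webm"],
        ["ffmpeg", "-i", src, "-c:v", "libtheora", "-c:a", "libvorbis", basename + ".ogv"],
        ["ffmpeg", "-i", src, "-c:v", "libx264", "-c:a", "aac", basename + ".mp4"],
    ]

def encode_all(src, basename):
    # Run each conversion in turn; the original upload stays untouched.
    for cmd in encode_commands(src, basename):
        subprocess.check_call(cmd)
```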

  • Customisable form

    21 June 2013, by

    This page presents the fields available in the form used to publish a media item and lists the fields that can be added. Media creation form
    For a media-type document, the default fields are: Text; Enable/disable the forum (the invitation to comment can be disabled for each article); Licence; Add/remove authors; Tags.
    This form can be modified under:
    Administration > Configuration of form masks. (...)

  • What is a form mask?

    13 June 2013, by

    A form mask is a customisation of the publication form for media, sections, news items, editorials and links to external sites.
    Each object’s publication form can therefore be customised.
    To customise form fields, go to your MediaSPIP administration area and select "Configuration of form masks".
    Then select the form to modify by clicking on its object type. (...)

On other sites (9740)

  • Use data to develop impactful video content

    28 September 2021, by Ben Erskine — Analytics Tips, Plugins

    Creating impactful video content is at the heart of what you do: it is how you really engage your audience, change behaviours and influence customers to complete your digital goals. But how do you create truly impactful marketing content? By testing, trialling, analysing and ultimately tweaking in response to data-informed insights that gear your content to your audience, rather than simply producing great content and shooting arrows in the dark.

    Whether you want to know how many plays your video has, its finish rate, how it is consumed over time, how it was consumed on specific days or even from which locations users are viewing it, Media Analytics will gather all of your video data in one place and answer all of these questions (and many more).

    What is impactful video content?

    Impactful video content grabs your audience’s attention, keeps it and prompts them to take measurable action, be that time spent on your website, goal completion or brand engagement (including following, commenting or sharing on social media). Perhaps you have produced video content and had some really great results, but not consistently, not every time, and it can be difficult to identify exactly what engages and entices on each occasion. We all want to find that sweet spot for your audience.

    Embedded video on your website can be a marketing piece that talks about the benefits of your product, or it can be educational or informative content that supports the brand and the overall impression it makes, and, at its very best, it is entertaining at the same time.

    84% of people say that they’ve been convinced to buy a product or service by watching a brand’s video. Building trust, knowledge and engagement is simply quicker with video. Viewers interact more and stay engaged longer with video; they are more likely to take in the message and trust what they see in educational, informative or even entertaining video marketing content than in text alone. Better still, they take action, complete goals on your website and engage with your brand (potentially long term).

    It is not enough simply to embed video content on your website; the video also needs to deliver all the elements of a well-functioning website, because creating the very best user experience is essential to keeping your viewers engaged. This includes ensuring the video is quick to load, on-brand, expected (in format and tone) and easy to use and find. When your video content is all of these things, your website users will stick around longer, spend more time exploring (and reading) your website and ultimately complete more of your goals. With a great user experience, your users are in turn more likely to come back to your website and trust your brand.

    All great reasons to create impactful video content that supports your website and brand! And to analyse the data around this behaviour so you can repeat (or better) the video content that really hits the mark.

    Let’s talk stats

    In terms of video marketing, there are stats suggesting that viewers retain 95% of a message when they view it in video format. The psychology behind this is fairly obvious: it is easier (and quicker) for people to watch someone explain something than to read about it and then act. Simply look at the rise of YouTube for explanatory and instructional video content!

    And what about the 87% of marketers who report a positive ROI from using video in their marketing? This number has increased steadily since 2015, matching the rise in video views over the years. That should be enough to demonstrate that video marketing is the way forward; however, it needs to be the right type of video to create impact and engagement.

    Do you need more reasons to hone and refine your video content for your audience, and to ride this wave of impactful video marketing success?

    But how do we do that?

    So, how do you make content that consistently converts your audience into engaged customers? The answer is in the numbers: the data. Collect data on each and every piece of media that is produced and put out into the world. Measure everything, from where it is viewed and how it is viewed to how much of it is viewed and what your viewers do after the fact.

    While Vimeo and YouTube each have their own video analytics, those analytics are siloed, meaning a lot more work for you to combine and analyse your data before forming useful insights.

    On those platforms your data is collected by external parties and is owned and used by them for their own ends. Using Web Analytics from Matomo to collect and collate media data means your robust data insights are all in one place, and you own the data, keeping it private, clean and easy to digest.

    Once your data is on a single platform, your time can be spent analysing it (rather than collating it) and discovering those valuable insights. These insights can then be collated and reported in one place, and used to inform future digital and video marketing planning, working with the data and alongside creative teams to produce video that speaks to your audience in an impactful way.

    The more data that is collected, the deeper the insights. A single platform saves time and money, and data-backed insights inform decisions about the time (and money) spent producing video content that truly hits the mark with your audience. No more wasted investment and firing into the dark.

    Interrogating the ideal length of your videos makes it more likely they will be viewed to the end. Understand the play rate of any video on your website: how often is it played, and which video is played most often? Constant tweaking and updating of your video content planning can be informed by data-driven, human-centric insights. By consistently tracking your media, analysing it and forming insights, you can build upon past work and create a fuller picture of who your audience is and how they will engage with future video content. Understanding your media over time leads to informed decisions about the video content, and the level of investment, needed to deliver ROI that means something.

    Wrap Up

    Media Analytics puts you at the heart of video engagement. No more guessing at what your audience wants to see, for how long or when. Make every piece of video content have the impact you want (and need) to drive engagement, goal completion and customer conversion. Create a user experience that keeps your users on your website longer, delivering on those digital marketing goals and speaking the language of key stakeholders throughout the business. Back your digital marketing with truly impactful content and, above all, give your audience content that keeps them engaged and coming back for more.

    Don’t just take our word for it! Take a look at what Matomo can offer you with streamlined, insightful Media Analytics, all in one place. Then go forth and create impactful content that matters.

    Next steps:

    Check out our detailed user guide to Media Analytics

    Or, if you have questions, see our helpful Video & Audio Analytics FAQs

  • How to detect delay or silence in an audio file?

    15 July 2022, by Lynob

    I want to detect silence or a delay of a given duration in an audio file and remove it; for example, when someone starts speaking and then pauses for a while to think.

    There's this question, but it only detects silence at the end and doesn't remove it. My colleague suggested sox, but I'm not sure it's the best tool for the job, nor, frankly, how to use it; moreover, the project died in 2015.
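    One approach to the question above is ffmpeg's silencedetect audio filter, which logs each silent range to stderr. The sketch below is illustrative, not a definitive answer: the `parse_silences` helper, the -30 dB threshold and the 0.5 s minimum duration are assumptions, and `detect_silences` requires ffmpeg on the PATH.

```python
import re
import subprocess

SILENCE_RE = re.compile(r"silence_(start|end): (\d+(?:\.\d+)?)")

def parse_silences(stderr_text):
    """Pair up silence_start/silence_end values from silencedetect's log."""
    events = SILENCE_RE.findall(stderr_text)
    starts = [float(v) for k, v in events if k == "start"]
    ends = [float(v) for k, v in events if k == "end"]
    return list(zip(starts, ends))

def detect_silences(path, noise_db=-30, min_dur=0.5):
    # silencedetect writes its findings to stderr; -f null discards the media.
    proc = subprocess.run(
        ["ffmpeg", "-i", path, "-af",
         f"silencedetect=noise={noise_db}dB:d={min_dur}", "-f", "null", "-"],
        capture_output=True, text=True)
    return parse_silences(proc.stderr)
```

    To strip the detected pauses rather than merely report them, ffmpeg's companion `silenceremove` filter can rewrite the file in one pass, e.g. `-af silenceremove=stop_periods=-1:stop_duration=0.5:stop_threshold=-30dB`.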

  • How do terminal pipes in Python differ from those in Rust?

    5 October 2022, by rust_convert

    To work on learning Rust (in a Tauri project), I am converting a Python 2 program that uses ffmpeg to create a custom video format from a GUI. The video portion converts successfully, but I am unable to get the audio to work. From the debugging I have done over the past few days, it looks like I am not reading the audio data from the terminal pipe correctly in Rust: what works for reading the video data does not work for the audio. I have tried reading the audio data as a string and then converting it to bytes, but then the byte array appears empty. I have been researching the piping of data in the Rust and Python documentation, and I am unsure how the Rust pipe could be empty or incorrect when the same approach works for the video.

    From this Python article and this Rust Stack Overflow exchange, it looks like the Python stdout pipe is equivalent to the Rust stdin pipe?
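    The pattern both snippets rely on (a parent process reading fixed-size frames from a child's standard output) can be reproduced with a small, self-contained stand-in; here a Python one-liner plays the role of ffmpeg, and the 16-byte "frame" size is arbitrary:

```python
import subprocess
import sys

# Stand-in child process: writes 32 bytes to its stdout, playing the role
# of ffmpeg in the snippets below.
child = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdout.buffer.write(b'x' * 32)"],
    stdout=subprocess.PIPE)

frames = []
frame = child.stdout.read(16)          # read one fixed-size "frame"
while len(frame) == 16:
    frames.append(frame)
    frame = child.stdout.read(16)      # a short or empty read ends the loop
child.wait()
```

    In Rust the same loop shape is `child.stdout.take()` followed by `read_exact` calls on a fixed-size buffer; in both languages it is the child's stdout, not its stdin, that the parent consumes.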

    The Python code snippet for video and audio conversion:

output = open(self.outputFile, 'wb')
devnull = open(os.devnull, 'wb')

vidcommand = [ FFMPEG_BIN,
    '-i', self.inputFile,
    '-f', 'image2pipe',
    '-r', '%d' % (self.outputFrameRate),
    '-vf', scaleCommand,
    '-vcodec', 'rawvideo',
    '-pix_fmt', 'bgr565be',
    '-f', 'rawvideo', '-']

vidPipe = ''
if os.name=='nt' :
    startupinfo = sp.STARTUPINFO()
    startupinfo.dwFlags |= sp.STARTF_USESHOWWINDOW
    vidPipe=sp.Popen(vidcommand, stdin = sp.PIPE, stdout = sp.PIPE, stderr = devnull, bufsize=self.inputVidFrameBytes*10, startupinfo=startupinfo)
else:
    vidPipe=sp.Popen(vidcommand, stdin = sp.PIPE, stdout = sp.PIPE, stderr = devnull, bufsize=self.inputVidFrameBytes*10)

vidFrame = vidPipe.stdout.read(self.inputVidFrameBytes)

audioCommand = [ FFMPEG_BIN,
    '-i', self.inputFile,
    '-f', 's16le',
    '-acodec', 'pcm_s16le',
    '-ar', '%d' % (self.outputAudioSampleRate),
    '-ac', '1',
    '-']

audioPipe=''
if (self.audioEnable.get() == 1):
    if os.name=='nt' :
        startupinfo = sp.STARTUPINFO()
        startupinfo.dwFlags |= sp.STARTF_USESHOWWINDOW
        audioPipe = sp.Popen(audioCommand, stdin = sp.PIPE, stdout=sp.PIPE, stderr = devnull, bufsize=self.audioFrameBytes*10, startupinfo=startupinfo)
    else:
        audioPipe = sp.Popen(audioCommand, stdin = sp.PIPE, stdout=sp.PIPE, stderr = devnull, bufsize=self.audioFrameBytes*10)

    audioFrame = audioPipe.stdout.read(self.audioFrameBytes) 

currentFrame = 0

while len(vidFrame)==self.inputVidFrameBytes:
    currentFrame+=1
    if(currentFrame%30==0):
        self.progressBarVar.set(100.0*(currentFrame*1.0)/self.totalFrames)
    if (self.videoBitDepth.get() == 16):
        output.write(vidFrame)
    else:
        b16VidFrame=bytearray(vidFrame)
        b8VidFrame=[]
        for p in range(self.outputVidFrameBytes):
            b8VidFrame.append(((b16VidFrame[(p*2)+0]>>0)&0xE0)|((b16VidFrame[(p*2)+0]<<2)&0x1C)|((b16VidFrame[(p*2)+1]>>3)&0x03))
        output.write(bytearray(b8VidFrame))

    vidFrame = vidPipe.stdout.read(self.inputVidFrameBytes) # Read where vidframe is to match up with audio frame and output?
    if (self.audioEnable.get() == 1):


        if len(audioFrame)==self.audioFrameBytes:
            audioData=bytearray(audioFrame) 

            for j in range(int(round(self.audioFrameBytes/2))):
                sample = ((audioData[(j*2)+1]<<8) | audioData[j*2]) + 0x8000
                sample = (sample>>(16-self.outputAudioSampleBitDepth)) & (0x0000FFFF>>(16-self.outputAudioSampleBitDepth))

                audioData[j*2] = sample & 0xFF
                audioData[(j*2)+1] = sample>>8

            output.write(audioData)
            audioFrame = audioPipe.stdout.read(self.audioFrameBytes)

        else:
            emptySamples=[]
            for samples in range(int(round(self.audioFrameBytes/2))):
                emptySamples.append(0x00)
                emptySamples.append(0x00)
            output.write(bytearray(emptySamples))

self.progressBarVar.set(100.0)

vidPipe.terminate()
vidPipe.stdout.close()
vidPipe.wait()

if (self.audioEnable.get() == 1):
    audioPipe.terminate()
    audioPipe.stdout.close()
    audioPipe.wait()

output.close()

    The Rust snippet that should accomplish the same goals:

let output_file = OpenOptions::new()
    .create(true)
    .truncate(true)
    .write(true)
    .open(&output_path)
    .unwrap();
let mut writer = BufWriter::with_capacity(
    options.video_frame_bytes.max(options.audio_frame_bytes),
    output_file,
);
let ffmpeg_path = sidecar_path("ffmpeg");
#[cfg(debug_assertions)]
let timer = Instant::now();

let mut video_cmd = Command::new(&ffmpeg_path);
#[rustfmt::skip]
video_cmd.args([
    "-i", options.path,
    "-f", "image2pipe",
    "-r", options.frame_rate,
    "-vf", options.scale,
    "-vcodec", "rawvideo",
    "-pix_fmt", "bgr565be",
    "-f", "rawvideo",
    "-",
])
.stdin(Stdio::null())
.stdout(Stdio::piped())
.stderr(Stdio::null());

// windows creation flag CREATE_NO_WINDOW: stops the process from creating a CMD window
// https://docs.microsoft.com/en-us/windows/win32/procthread/process-creation-flags
#[cfg(windows)]
video_cmd.creation_flags(0x08000000);

let mut video_child = video_cmd.spawn().unwrap();
let mut video_stdout = video_child.stdout.take().unwrap();
let mut video_frame = vec![0; options.video_frame_bytes];

let mut audio_cmd = Command::new(&ffmpeg_path);
#[rustfmt::skip]
audio_cmd.args([
    "-i", options.path,
    "-f", "s16le",
    "-acodec", "pcm_s16le",
    "-ar", options.sample_rate,
    "-ac", "1",
    "-",
])
.stdin(Stdio::null())
.stdout(Stdio::piped())
.stderr(Stdio::null());

#[cfg(windows)]
audio_cmd.creation_flags(0x08000000);

let mut audio_child = audio_cmd.spawn().unwrap();
let mut audio_stdout = audio_child.stdout.take().unwrap();
let mut audio_frame = vec![0; options.audio_frame_bytes];

while video_stdout.read_exact(&mut video_frame).is_ok() {
    writer.write_all(&video_frame).unwrap();

    if audio_stdout.read_to_end(&mut audio_frame).is_ok() {
        if audio_frame.len() == options.audio_frame_bytes {
            for i in 0..options.audio_frame_bytes / 2 {
                let temp_sample = ((u32::from(audio_frame[(i * 2) + 1]) << 8)
                    | u32::from(audio_frame[i * 2]))
                    + 0x8000;
                let sample = (temp_sample >> (16 - 10)) & (0x0000FFFF >> (16 - 10));

                audio_frame[i * 2] = (sample & 0xFF) as u8;
                audio_frame[(i * 2) + 1] = (sample >> 8) as u8;
            }
        } else {
            audio_frame.fill(0x00);
        }
    }
    writer.write_all(&audio_frame).unwrap();
}


video_child.wait().unwrap();
audio_child.wait().unwrap();

#[cfg(debug_assertions)]
{
    let elapsed = timer.elapsed();
    dbg!(elapsed);
}

writer.flush().unwrap();

    I have looked at the hex data of the files using HxD; no matter how I alter the Rust program, I cannot get data different from what is previewed in the attached image, so the audio pipe is being read incorrectly. I have included a screenshot of the hex data from the working Python program, which converts the video and audio correctly.

    HxD Python program hex output (image)

    HxD Rust program hex output (image)