Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (112)

  • Customizing by adding your logo, banner, or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MediaSPIP, or news about your projects on your MediaSPIP, using the news section.
    In spipeo, the default MediaSPIP theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news creation form.
    News creation form: for a document of the news type, the default fields are: Publication date (customize the publication date) (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

On other sites (5652)

  • How to use ffmpeg device to record screen

    25 January 2021, by alexandreleal

    I'm trying to record the screen using the ffmpeg-next wrapper (https://crates.io/crates/ffmpeg-next) and save the result to an MKV file. There's an example for remuxing in the crate's GitHub repository (https://github.com/zmwangx/rust-ffmpeg/blob/master/examples/remux.rs), and from what I've read in FFmpeg's documentation, replacing the input context's demuxer with a device that captures the screen would do the trick.

    The input context is declared like this:

    let mut ictx = format::input(&input_file).unwrap();

    Can I change this input context to a device that captures the screen?

    Thanks!
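
    One possible direction, sketched below (this is not part of the original post): libavdevice exposes screen-grab demuxers such as x11grab, and rust-ffmpeg can open one of them in place of a file path. Everything device-specific here is an assumption: the x11grab format and the ":0.0" display name suit a Linux/X11 setup (Windows would use gdigrab, macOS avfoundation), the framerate/video_size options are placeholders, and the crate must be built with its device feature.

    use ffmpeg_next as ffmpeg;
    use ffmpeg::format::{self, Format};
    use ffmpeg::Dictionary;

    fn open_screen_capture() -> Result<format::context::Input, ffmpeg::Error> {
        ffmpeg::init()?;
        // Register libavdevice so the grab formats become visible.
        ffmpeg::device::register_all();

        // Find the x11grab demuxer among the available video input devices.
        let grab = ffmpeg::device::input::video()
            .find(|d| d.name() == "x11grab")
            .ok_or(ffmpeg::Error::DemuxerNotFound)?;

        // Options the ffmpeg CLI would take as -framerate / -video_size (placeholders).
        let mut opts = Dictionary::new();
        opts.set("framerate", "30");
        opts.set("video_size", "1920x1080");

        // The "path" names the X display to capture instead of a file.
        let display = ":0.0";
        match format::open_with(&display, &Format::Input(grab), opts)? {
            format::context::Context::Input(ictx) => Ok(ictx),
            _ => unreachable!("x11grab is an input format"),
        }
    }

    The rest of the remux example should then be able to read packets from this input context as it does for a file, though screen-grab devices deliver raw video, so encoding the stream before muxing to MKV is usually preferable to a straight stream copy.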

  • Capture multiple individual streams with ALSA/FFmpeg

    3 November 2017, by user3170450

    There are similar questions available, but I couldn't relate them to my use case. As I am new to ALSA, I would like to explain my use case first.

    I open my application in Google Chrome, and I need to capture the audio being played in that Chrome window. I did this successfully by capturing the speaker output with the following command:

    ffmpeg -f pulse -i alsa_output.pci-0000_00_1b.0.analog-stereo.monitor -ac 1 -ar 16000 test.wav

    The real problem is that I will be opening multiple windows at the same time, and I need to capture each audio stream separately. I know there is something like a virtual audio device, but I don't know how to configure and use one for my use case.

    Please guide me in the right direction.

    Just as a note, my system has only one physical sound card, which is what I am capturing for the single window.
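
    One possible direction, sketched below (this is not part of the original post): PulseAudio can create a separate virtual null sink per window, and ffmpeg can then record each sink's monitor source on its own. The sink names, the sink-input index placeholder, and the idea of moving each Chrome window's playback stream with pactl move-sink-input (or interactively with pavucontrol) are all assumptions about the setup.

    # Create one virtual sink per window (the names are arbitrary).
    pactl load-module module-null-sink sink_name=win1
    pactl load-module module-null-sink sink_name=win2

    # Move each window's playback stream to its own sink
    # (list the stream indexes with: pactl list sink-inputs).
    pactl move-sink-input <sink-input-index> win1

    # Record each sink's monitor source separately, as in the command above.
    ffmpeg -f pulse -i win1.monitor -ac 1 -ar 16000 window1.wav
    ffmpeg -f pulse -i win2.monitor -ac 1 -ar 16000 window2.wav

    Note that audio routed to a null sink is no longer audible through the speakers unless it is looped back, for example with module-loopback.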

  • How to make MediaRecorder API individual chunks playable by themselves

    10 August 2024, by tedGuy

    I'm trying to send individual chunks to the server instead of sending the whole recording at once. That way, FFmpeg on my Rails server can convert each chunk to HLS and upload it to S3 so the video can be streamed almost instantly. However, I've run into an issue: MediaRecorder only produces a playable chunk for the first segment; after that, the chunks are not playable on their own and need to be concatenated before they can be played.

    To avoid this, I've taken a different approach where I start a new MediaRecorder every 3 seconds so that I get a playable chunk each time. However, this approach has its own issue: the video glitches a bit because of the delay when I stop one MediaRecorder and start the next. Is there a way to achieve this more cleanly? This is my current code; please help!

  const startVideoRecording = async (
    screenStream: MediaStream,
    audioStream: MediaStream
  ) => {
    setStartingRecording(true);

    try {
      const res = await getVideoId();
      videoId.current = res;
    } catch (error) {
      console.log(error);
      return;
    }

    const outputStream = new MediaStream();
    outputStream.addTrack(screenStream.getVideoTracks()[0]);
    outputStream.addTrack(audioStream.getAudioTracks()[0]); // Add audio track

    const mimeTypes = [
      "video/webm;codecs=h264",
      "video/webm;codecs=vp9",
      "video/webm;codecs=vp8",
      "video/webm",
      "video/mp4",
    ];

    let selectedMimeType = "";
    for (const mimeType of mimeTypes) {
      if (MediaRecorder.isTypeSupported(mimeType)) {
        selectedMimeType = mimeType;
        break;
      }
    }

    if (!selectedMimeType) {
      console.error("No supported mime type found");
      return;
    }

    const videoRecorderOptions = {
      mimeType: selectedMimeType,
    };

    let chunkIndex = 0;

    const startNewRecording = () => {
      // Stop the current recorder if it's running
      if (
        videoRecorderRef.current &&
        videoRecorderRef.current.state === "recording"
      ) {
        videoRecorderRef.current.stop();
      }

      // Create a new MediaRecorder instance
      const newVideoRecorder = new MediaRecorder(
        outputStream,
        videoRecorderOptions
      );
      videoRecorderRef.current = newVideoRecorder;

      newVideoRecorder.ondataavailable = async (event) => {
        if (event.data.size > 0) {
          chunkIndex++;
          totalSegments.current++;
          const segmentIndex = totalSegments.current;
          console.log(event.data);
          handleUpload({ segmentIndex, chunk: event.data });
        }
      };

      // Start recording; the 3-second rotation is driven by the setInterval below
      newVideoRecorder.start();
    };

    // Start the first recording
    startNewRecording();

    // Set up an interval to restart the recording every 3 seconds
    recordingIntervalIdRef.current = setInterval(() => {
      startNewRecording();
    }, 3000);

    setIsRecording(true);
    setStartingRecording(false);
  };