Advanced search

Media (1)

Keyword: - Tags -/illustrator

Other articles (68)

  • Contributing to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    Translation is done through SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    At the moment MediaSPIP is only available in French and (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MediaSPIP installation is at version 0.2 or higher. If necessary, contact the administrator of your MediaSPIP to find out.

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (12032)

  • FFmpeg cannot open video file after adding the GLSurfaceView to render frames

    4 April 2016, by Kyle Lo

    The source code works perfectly without any modification.

    I successfully use the below function to play the specified video.

    playview.openVideoFile("/sdcard/Test/mv.mp4");

    For research purposes I need to display the frames using OpenGL ES, so I removed the original rendering code below.

    ANativeWindow* window = ANativeWindow_fromSurface(env, javaSurface);

    ANativeWindow_Buffer buffer;
    if (ANativeWindow_lock(window, &buffer, NULL) == 0) {
     memcpy(buffer.bits, pixels,  w * h * 2);
     ANativeWindow_unlockAndPost(window);
    }

    ANativeWindow_release(window);

    Then I added a FrameRenderer class to my project:

     import javax.microedition.khronos.egl.EGLConfig;
     import javax.microedition.khronos.opengles.GL10;

     import android.opengl.GLSurfaceView;

     public class FrameRenderer implements GLSurfaceView.Renderer {

        public long time = 0;
        public short framerate = 0;
        public long fpsTime = 0;
        public long frameTime = 0;
        public float avgFPS = 0;
        private PlayNative mNative = null;

        @Override
        public void onSurfaceCreated(GL10 gl, EGLConfig config) {/*do nothing*/}

        @Override
        public void onSurfaceChanged(GL10 gl, int width, int height) {/*do nothing*/}

        @Override
        public void onDrawFrame(GL10 gl) {
            // Each frame is drawn on the GL thread via the native side.
            mNative.render();
        }
     }

    On the native side I created a corresponding method in VideoPlay.cpp, and I only use glClearColor to check whether the OpenGL calls work.

    void VideoPlay::render() {
       glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
       glClear(GL_COLOR_BUFFER_BIT);
    }

    And my onCreate is as below:

    protected void onCreate(Bundle savedInstanceState) {
           // TODO Auto-generated method stub
           super.onCreate(savedInstanceState);
           setContentView(R.layout.main_layout);

           playview = new PlayView(this);

           playview.openVideoFile("/sdcard/test_tt_racing.mp4");
           //playview.openVideoFile("/sdcard/big_buck_bunny.mp4");

           GLSurfaceView surface = (GLSurfaceView)findViewById(R.id.surfaceviewclass);
           surface.setRenderer(new FrameRenderer());
           ...

    When I test it on the phone, the screen turns red, which means the GLSurfaceView and OpenGL are working fine.

    But after I press the play button, the whole app gets stuck, and an error shows up in the log.

    My question is: why can I no longer open the video, whose path is exactly the same as before, once I add the GLSurfaceView renderer, and how can I fix it?

  • How to have multiple websocket RTSP streams?

    6 October 2020, by kinx

    After spending some time reading various open-source projects on how to develop RTSP and WebSocket streams, I've almost built a simple project that allows me to display multiple streams on the page.

    I have a working example of just one stream with the code below: a single URL in an array is sent to the client via WebSocket, and JSMpeg displays it with some success.

    However, I'm not sure how to build this so that I have multiple sockets, each carrying one RTSP stream, and how to give each socket URL its own id. The idea is to encrypt the URL and, when the client requests the list of streams, send that back as a socket id and have JSMpeg request that data (see the sketch after the server code below).

    Server:

     class Stream extends EventEmitter {
       constructor() {
         super();
         this.urls = ["rtsp://someIPAddress:554/1"];
         this.urls.map((url) => {
           this.start(url);
         });
       }
       start(url) {
         this.startStream(url);
       }
       // Build the ffmpeg argument list: read the RTSP input over TCP and
       // transcode it to MPEG-TS (mpeg1video + mp2) written to stdout ("-").
       setOptions(url) {
         const options = {
           "-rtsp_transport": "tcp",
           "-i": url,
           "-f": "mpegts",
           "-codec:v": "mpeg1video",
           "-codec:a": "mp2",
           "-stats": "",
           "-b:v": "1500k",
           "-ar": "44100",
           "-r": 30,
         };
         let params = [];
         for (let key in options) {
           params.push(key);
           if (String(options[key]) !== "") {
             params.push(String(options[key]));
           }
         }
         params.push("-");
         return params;
       }
       // Spawn ffmpeg and fan its stdout (the MPEG-TS data) out to every
       // WebSocket client connected on port 8080.
       startStream(url) {
         const wss = new WebSocket.Server({ port: 8080 });
         this.child = child_process.spawn("ffmpeg", this.setOptions(url));
         this.child.stdout.on("data", (data) => {
           wss.clients.forEach((client) => {
             client.send(data);
           });
           return this.emit("data", data);
         });
       }
     }

     const s = new Stream();
     s.on("data", (data) => {
       console.log(data);
     });

    In the constructor there's an array of URLs; while I only have one here, I'd like to add several. I create a websocket and send that back. What I'd like to do is hash that URL with Crypto.createHash('md5').update(url).digest('hex') to give it its own id, create a websocket based on that id, send the data to that websocket, and send that id to the client along with a list of the other ids.
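
    A minimal sketch of that id-per-URL idea (assuming Node's built-in crypto module, the same ws package used above, and one WebSocket server per stream on its own port; the second URL and the port numbers are placeholders, and the ffmpeg piping is omitted since it would stay the same as in startStream() above):

     const crypto = require("crypto");
     const WebSocket = require("ws");

     // Stable id for each RTSP URL, as described above.
     function streamId(url) {
       return crypto.createHash("md5").update(url).digest("hex");
     }

     // Illustrative list; in practice these would be the constructor's URLs.
     const urls = ["rtsp://someIPAddress:554/1", "rtsp://someIPAddress:554/2"];

     const streams = urls.map((url, i) => {
       const id = streamId(url);
       const port = 8080 + i; // assumption: one WebSocket server per stream/port
       const wss = new WebSocket.Server({ port });
       // ffmpeg's stdout for this url would be fanned out to wss.clients here,
       // exactly as in startStream() above.
       return { id, port };
     });

     // A hypothetical /api/streams endpoint could return this array (ids and
     // ports only), so the client never sees the raw RTSP URLs.
     console.log(streams);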

    Client:

     <canvas id="video" style="width: 100%"></canvas>
     <script type="text/javascript">
       var player = new JSMpeg.Player("ws://localhost:8080", {
         loop: true,
         autoplay: true,
         canvas: document.getElementById("video"),
       });
     </script>


    What I'd like to do here is request from /api/streams and get back an array of streams/socket id's and request them from the array.


    But how do I open up multiple sockets with multiple URLs?


  • (ffmpeg) How to sync dshow inputs, dropping frames, and -rtbufsize [closed]

    29 July 2021, by Zach Fleeman

    I wrote a quick batch script to capture anything from my Elgato HD60 Pro capture card, and while it works in some ways, I don't really understand how certain parameters are affecting my capture.


    Whenever I run this command without the -rtbufsize 2048M -thread_queue_size 5096 params, I drop a ton of frames. I only added those params with those values because I found them in another Stack Overflow thread. I wouldn't mind actually knowing what they do and how I can fine-tune them for my script.


     ffmpeg.exe -y -rtbufsize 2048M -thread_queue_size 5096 -fflags +igndts ^
     -f dshow -i video="Game Capture HD60 Pro":audio="Game Capture HD60 Pro Audio" ^
     -filter:v "crop=1410:1080:255:0, scale=706x540" ^
     -c:v libx264 -preset veryfast -b:v 1500k -pix_fmt yuv420p ^
     -c:a aac ^
     -f tee -map 0:v -map 0:a "%mydate%_%mytime%_capture.mp4|[f=flv]rtmp://xxx.xxx.xxx.xxx/live"


    In Open Broadcaster Software, my Elgato is a near-instant video feed, but this captures/streams things at a 3-ish second delay, which is okay until I work on this second command. I'm using gdigrab to capture the window from LiveSplit for my speedrunning, but I can't get the video streams to be synced up. I tried adding and modifying another -rtbufsize before the gdigrab input, but again, I'm not sure if this is what I need to do to delay the LiveSplit grab. It seems to always be 2 to 3 seconds ahead of my capture card. How can I get these inputs to be synced and react at the same time? i.e., so that I start the timer in LiveSplit at the same time that I hit a button on my Super Nintendo.


     ffmpeg.exe -y -rtbufsize 750M -thread_queue_size 5096 ^
     -f dshow -i video="Game Capture HD60 Pro":audio="Game Capture HD60 Pro Audio" ^
     -rtbufsize 2000M -thread_queue_size 5096 ^
     -f gdigrab -r 60 -i title=LiveSplit ^
     -filter_complex "[0:v][0:v]overlay=255:0 [game];[game][1:v]overlay=0:40 [v]" ^
     -c:v libx264 -preset veryfast -b:v 1500k -pix_fmt yuv420p ^
     -c:a aac ^
     -f tee -map "[v]" -map 0:a "%mydate%_%mytime%_capture.mp4|[f=flv]rtmp://192.168.1.7/live"


    tl;dr: Where should I put -rtbufsize? What value should it be? And how about -thread_queue_size? Are these things I have to specify once, or multiple times for each input? How can I get my different input sources to sync up?


    p.s., I'm cropping and overlaying my Elgato inputs because my capture card does 1920x1080, but my video is most likely a 4:3-ish SNES/NES game.
