
Other articles (40)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, as announced here.
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further changes are also required (...)

  • Making files available

    14 April 2011, by

    By default, when first initialised, MediaSPIP does not let visitors download files, whether originals or the results of their transformation or encoding; it only lets them view the files.
    However, it is possible, and easy, to give visitors access to these documents in various forms.
    All of this happens on the skeleton configuration page: go to the channel's administration area and choose, in the navigation, (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    For a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, further changes are also required (...)

On other sites (3748)

  • Packet loss from an RTP stream decoded with ffmpeg

    29 April 2020, by anaVC94

    I hope someone can help me. I am trying to apply a neural network (YOLOv2) to perform object detection on an RTP stream that I have simulated locally using VLC, with the "use RTP over TCP" option enabled.
The stream is 4K, 30 fps, 15 Mb/s. I am using the OpenCV C API I/O module to read frames from the stream.

    I am using the usual code. I open the RTP stream as follows:
cap = cvCaptureFromFile(url);

    Then I capture frames in one thread:
IplImage* src = cvQueryFrame(cap);

    and run the detection part on another thread.

    I know OpenCV uses FFmpeg to capture video; I am using FFmpeg 3.3.2. My problem is that when I receive the stream, a lot of artifacts appear. The output I get is:

    top block unavailable for requested intra mode -1
[h264 @ 0000016ae36f0a40] error while decoding MB 40 0, bytestream 81024
[h264 @ 0000016ae32f3860] top block unavailable for requested intra mode
[h264 @ 0000016ae32f3860] error while decoding MB 48 0, bytestream 102909
[h264 @ 0000016ae35e9e60] Reference 3 >= 3
[h264 @ 0000016ae35e9e60] error while decoding MB 79 0, bytestream 27231
[h264 @ 0000016a7033eb40] mmco: unref short failure
[h264 @ 0000016ae473ee00] Reference 3 >= 3


    over and over again, and there is so much packet loss that I can't see anything when displaying the received frames. However, this doesn't happen when I stream lower-quality videos over RTP, such as HD at 30 fps. It is also true that the 4K clip has a lot of motion (it is a MotoGP race).

    I have tried:
- Reducing the fps of the stream.
- Reducing the bitrate.
- ffplay doesn't display the input frames correctly either, but VLC does (I don't know why).
- Forcing TCP transmission.
- Forcing the input fps with cvSetCaptureProperty(cap, CV_CAP_PROP_FPS, 1);

    Why is this happening, and how can I reduce the packet loss? Is there anything else I can try?

    Thanks a lot

  • Running pulseaudio inside docker container to record system audio

    20 March 2023, by XXLuigiMario

    I'm trying to set up a Docker container with Selenium that records the browser, with system audio, using ffmpeg. I've got video working using Xvfb. Unfortunately, the audio side seems to be trickier.

    I thought I would set up a virtual PulseAudio null sink inside the container, which would let me record its monitor:

    pacmd load-module module-null-sink sink_name=loopback
pacmd set-default-sink loopback
ffmpeg -f pulse -i loopback.monitor test.wav


    This works on my host operating system, but when I try to start the pulseaudio daemon in a container, it fails with the following message:

    E: [pulseaudio] module-console-kit.c: Unable to contact D-Bus system bus: org.freedesktop.DBus.Error.FileNotFound: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory


    This seems to be related to a freedesktop service called D-Bus. I've tried installing it and starting its daemon, but I couldn't get it to work properly.
I couldn't find much information on how to proceed from here. What am I missing for pulseaudio? Perhaps there's an easier way to record the system audio inside a container?
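    One commonly suggested workaround (a sketch under assumptions, not verified here: PulseAudio's `-n` flag skips the default.pa script, so D-Bus-dependent modules such as module-console-kit are never loaded, and `-L` loads only the modules named on the command line):

```shell
# Start a minimal PulseAudio daemon: -n skips default.pa (and with it
# module-console-kit, which needs the D-Bus system bus); -L loads only
# the modules the recording actually needs.
pulseaudio -D --exit-idle-time=-1 -n \
    -L "module-native-protocol-unix" \
    -L "module-null-sink sink_name=loopback" \
    -L "module-always-sink"

# Then record the null sink's monitor as before:
pacmd set-default-sink loopback
ffmpeg -f pulse -i loopback.monitor test.wav
```

    Since this is container plumbing rather than portable code, treat it as a starting point; some images also need a writable XDG_RUNTIME_DIR for the daemon's socket.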

    My goal is not to record it from the host operating system, but to play the audio inside the browser and record everything inside the same container.

  • Muxing raw h264 + aac into mp4 file with av_interleaved_write_frame() returning 0 but the video is not playable

    3 April 2020, by Jaw109

    I have a program [1] that muxes audio and video into an mp4 file (in individual worker threads, retrieving audio/video frames from a streaming daemon). The audio plays perfectly in VLC, but the video is not playable; VLC's debug log shows that the start code of the video frames is not found.

    I have another demuxing program [2] that retrieves all the frames to see what happened. I found that the video frames are modified:

    00000001 674D0029... was modified into 00000019 674D0029... (framesize is 29)
00000001 68EE3C80... was modified into 00000004 68EE3C80... (framesize is 8)
00000001 65888010... was modified into 0002D56F 65888010... (framesize is 185715)
00000001 619A0101... was modified into 00003E1E 619A0101... (framesize is 15906)
00000001 619A0202... was modified into 00003E3C 619A0202... (framesize is 15936)
00000001 619A0303... was modified into 00003E1E 619A0303... (framesize is 15581)


    It seems the h264 start code was replaced with something like the frame size. But why? Did I do anything wrong? (Any ideas? Some flag? AVPacket initialization? AVPacket data copied wrongly?)
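    For context on the pattern above (my reading, not stated in the original post): MP4 stores H.264 in AVCC form, where each NAL unit is preceded by a big-endian length field (here 4 bytes) instead of the Annex-B 00 00 00 01 start code, and the dumped values fit: 0x00000019 = 25 payload bytes plus the 4-byte prefix gives the reported frame size 29. A self-contained sketch of undoing that framing:

```cpp
// Sketch: convert an AVCC (length-prefixed) buffer back to Annex-B by
// replacing each 4-byte big-endian length prefix with a start code.
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<uint8_t> avcc_to_annexb(const std::vector<uint8_t>& in) {
    std::vector<uint8_t> out;
    size_t pos = 0;
    while (pos + 4 <= in.size()) {
        uint32_t len = (uint32_t(in[pos])     << 24) |
                       (uint32_t(in[pos + 1]) << 16) |
                       (uint32_t(in[pos + 2]) <<  8) |
                        uint32_t(in[pos + 3]);
        if (pos + 4 + len > in.size()) break;        // truncated NALU: stop
        const uint8_t startcode[4] = {0, 0, 0, 1};   // Annex-B start code
        out.insert(out.end(), startcode, startcode + 4);
        out.insert(out.end(), in.begin() + pos + 4, in.begin() + pos + 4 + len);
        pos += 4 + len;
    }
    return out;
}
```

    Running this over the demuxer's packet data should recover buffers that begin with the familiar start codes again.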

    [1] muxing program

    int go_on = 1;
std::mutex g_mutex;
AVFormatContext* g_FmtCntx = NULL;   /* shared with the worker threads */
AVStream* g_AudioStream = NULL;
AVStream* g_VideoStream = NULL;

int polling_ringbuffer(int stream_type);

int main(int argc, char** argv)
{

  g_FmtCntx = avformat_alloc_context();
  avio_open(&g_FmtCntx->pb, argv[1], AVIO_FLAG_WRITE);
  g_FmtCntx->oformat = av_guess_format(NULL, argv[1], NULL);
  g_AudioStream = avformat_new_stream( g_FmtCntx, NULL );
  g_VideoStream = avformat_new_stream( g_FmtCntx, NULL );
  initAudioStream(g_AudioStream->codecpar);
  initVideoStream(g_VideoStream->codecpar);
  avformat_write_header(g_FmtCntx, NULL);

  std::thread audio(polling_ringbuffer, AUDIO_RINGBUFFER);
  std::thread video(polling_ringbuffer, VIDEO_RINGBUFFER);

  audio.join();
  video.join();

  av_write_trailer(g_FmtCntx);
  if ( g_FmtCntx->oformat && !( g_FmtCntx->oformat->flags & AVFMT_NOFILE ) && g_FmtCntx->pb )
    avio_close( g_FmtCntx->pb );
  avformat_free_context( g_FmtCntx );

  return 0;
}

int polling_ringbuffer(int stream_type)
{
  uint8_t* data = new uint8_t[1024*1024];
  int64_t timestamp = 0;
  int data_len = 0;
  while(go_on)
  {
    const std::lock_guard<std::mutex> lock(g_mutex);
    data_len = ReadRingbuffer(stream_type, data, 1024*1024, &timestamp);

    AVPacket pkt = {0};
    av_init_packet(&pkt);
    pkt.data = data;
    pkt.size = data_len;

    static AVRational r = {1,1000};
    switch(stream_type)
    {
      case STREAMTYPE_AUDIO:
        pkt.stream_index = g_AudioStream->index;
        pkt.flags = 0;
        pkt.pts = av_rescale_q(timestamp, r, g_AudioStream->time_base);
        break;
      case STREAMTYPE_VIDEO:
        pkt.stream_index = g_VideoStream->index;
        pkt.flags = isKeyFrame(data, data_len)?AV_PKT_FLAG_KEY:0;
        pkt.pts = av_rescale_q(timestamp, r, g_VideoStream->time_base);
        break;
    }
    static int64_t lastPTS = 0;
    pkt.dts = pkt.pts;
    pkt.duration = (lastPTS==0)? 0 : (pkt.pts-lastPTS);
    lastPTS = pkt.pts;

    int ret = av_interleaved_write_frame(g_FmtCntx, &pkt);
    if(0!=ret)
      printf("[%s:%d] av_interleaved_write_frame():%d\n", __FILE__, __LINE__, ret);
  }

  return 0;
}


    [2] demuxing program

    int main(int argc, char** argv)
{
  AVFormatContext* pFormatCtx = avformat_alloc_context();
  AVPacket pkt;
  av_init_packet(&pkt);
  avformat_open_input(&pFormatCtx, argv[1], NULL, NULL);
  for(;;)
  {
    if (av_read_frame(pFormatCtx, &pkt) >= 0)
    {
      printf("[%d] %s (len:%d)\n", pkt.stream_index, BinToHex(pkt.data, MIN(64, pkt.size)), pkt.size );
      av_packet_unref(&pkt);  /* release the packet buffer before the next read */
    }
    else
      break;
  }

  avformat_close_input(&pFormatCtx);
  return 0;
}


    [3] Here is my environment:

    Linux MY-RASP-4 4.14.98 #1 SMP Mon Jun 24 12:34:42 UTC 2019 armv7l GNU/Linux
ffmpeg version 4.1 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 8.2.0 (GCC)

libavutil      56. 22.100 / 56. 22.100
libavcodec     58. 35.100 / 58. 35.100
libavformat    58. 20.100 / 58. 20.100
libavdevice    58.  5.100 / 58.  5.100
libavfilter     7. 40.101 /  7. 40.101
libswscale      5.  3.100 /  5.  3.100
libswresample   3.  3.100 /  3.  3.100
libpostproc    55.  3.100 / 55.  3.100