Other articles (112)

  • Automatic installation script for MediaSPIP

    25 April 2011, by

    To work around the installation difficulties caused mainly by server-side software dependencies, an all-in-one bash installation script was created to ease this step on a server running a compatible Linux distribution.
    To use it, you need SSH access to your server and a "root" account, which makes it possible to install the dependencies. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

  • Requesting the creation of a channel

    12 March 2010, by

    Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel. The first is at the time of registration; the second, after registration, by filling in a request form.
    Both methods ask for the same information and work in much the same way: the prospective user fills in a series of form fields that first of all give the administrators information about (...)

  • Automatic backup of SPIP channels

    1 April 2010, by

    When setting up an open platform, it is important for the hosts to have fairly regular backups available to guard against any possible problem.
    This task relies on two SPIP plugins: Saveauto, which makes regular backups of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the elements (...)

On other sites (11525)

  • av_read_frame and time stamps C++

    19 April 2014, by halfwaythru

    I am recording an RTSP H264 stream from an Axis IP camera using libavformat. This camera is supposed to stamp every frame with the time that the frame was acquired, and this information is supposed to be in the RTP packet header.

    This is the code that I am using to read in the frames.

    AVFormatContext *inputFormatCtxt = NULL;
    AVPacket packet;

    // Open the RTSP input and read the stream descriptions.
    avformat_open_input(&inputFormatCtxt, inputURL, NULL, NULL);
    avformat_find_stream_info(inputFormatCtxt, NULL);

    while (av_read_frame(inputFormatCtxt, &packet) >= 0)
    {
       if (packet.stream_index == videoStreamIndex)
       {
          // Do something to video packet.
       }
       else
       {
          // Do something to audio packet.
       }

       // Rescale the timestamps from the input stream's time base to the
       // output stream's time base before muxing.
       if (packet.pts != AV_NOPTS_VALUE)
           packet.dts = packet.pts = av_rescale_q(packet.pts, stream->time_base, oStream->time_base);
       else
           NSLog(@"packet.pts == AV_NOPTS_VALUE");

       if (av_interleaved_write_frame(outputFormatCtxt, &packet) < 0)
           NSLog(@"Could not write out frame.");

       av_free_packet(&packet);
    }

    Now in AVPacket, the only time-related information is the pts and the dts. After converting them into seconds, these are supposed to be the offset of the packet from the start of the stream.
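
    For reference, converting a pts into seconds is normally done with the stream's time base; a minimal sketch (av_q2d() is the standard libavutil helper, and `stream` is the input stream from the code above):

    // Packet offset from the start of the stream, in seconds.
    double pts_seconds = packet.pts * av_q2d(stream->time_base);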

    My question is: How do I get the start time of the stream?

    These are the many things that I have tried:

    1.) In AVFormatContext there is a variable start_time_realtime that is "supposed" to be the start time of the stream in real world time, in microseconds. This is exactly what I need. But no matter what I do, this value is always 0, and never changes. Am I missing an initialization step, so that this never gets set?
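
    For what it's worth, reading that field would typically look like the sketch below, with AV_NOPTS_VALUE rather than 0 usually marking "unknown" (this is only a reading sketch, not a fix for the always-0 behaviour described above):

    // pts = 0 in the stream corresponds to this wall-clock time, if set.
    int64_t start_us = inputFormatCtxt->start_time_realtime;
    if (start_us != AV_NOPTS_VALUE)
    {
       // use start_us (microseconds since the Unix epoch)
    }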

    2.) Looking at this link, I added an RTPDemuxContext object to my code:

    // These structs live in FFmpeg's private headers (libavformat/rtsp.h,
    // libavformat/rtpdec.h), so this relies on building against the FFmpeg
    // source tree rather than the installed public headers.
    RTSPState *rtsp_state = (RTSPState *)inputFormatCtxt->priv_data;
    RTSPStream *rtsp_stream = rtsp_state->rtsp_streams[0];
    RTPDemuxContext *rtp_demux_context = (RTPDemuxContext *)rtsp_stream->transport_priv;

    When I tried to look at the last_rtcp_reception_time, last_rtcp_ntp_time, and last_rtcp_timestamp timestamps within the RTPDemuxContext object, these values are also always 0, and don't change.

    3.) Following on from the last point, I tried to force-fetch a packet using ff_rtsp_fetch_packet(inputFormatCtxt, &packet). This did update the RTPDemuxContext timestamps, but only while stepping through the code. If I just ran the code in a loop, it always remained the same as whatever was in the RTPDemuxContext object before the loop.

    int64_t x = 0;
    x = rtp_demux_context->last_rtcp_reception_time;   // x is 0 here.
    while (ff_rtsp_fetch_packet(inputFormatCtxt, &packet) >= 0)
    {
       // x changes only when stepping through in a debugger;
       // when run normally it stays 0.
       x = rtp_demux_context->last_rtcp_reception_time;
    }

    At this point I have no idea what I am doing wrong. I can't seem to get this timestamp information, no matter what I try. Any help is much appreciated. Thanks!

  • provide time period in ffmpeg drawtext filter

    25 January 2014, by ZafarYousafi

    I am trying to add text to a video using ffmpeg and want the text to appear for a given period of time. I am trying to use the drawtext filter but don't know how to provide the time period for this filter. Can anybody please help me?
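
    For context, the stock drawtext filter in recent FFmpeg builds supports timeline editing through its enable option, where t is the frame timestamp in seconds. A minimal sketch of the kind of filter string involved (the font path, text, and times are placeholders):

    // Show the text only between t=5s and t=10s (placeholder values).
    const char *filter_desc =
        "drawtext=fontfile=/path/to/font.ttf:text='hello'"
        ":enable='between(t,5,10)'";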

    Thanks

  • How to process remote audio/video stream on WebRTC server in real-time? [closed]

    7 September 2020, by Kartik Rokde

    I'm new to audio/video streaming. I'm using AntMedia Pro for audio/video conferencing. There will be 5-8 hosts who will be speaking, and the expected audience size is 15-20k (worth mentioning because this won't be P2P conferencing, but an MCU architecture).

    I want to offer a feature where a user can request "convert voice to female / robot / whatever", which would let that user hear the manipulated voice in the conference.

    From what I know, I need to do real-time processing on the server to achieve this. I want to intercept the stream on the server, do some processing (change the voice) on each of the tracks, and stream the result back to the requestor.

    The first challenge I'm facing is how to get the stream and/or the individual tracks on the server.

    I did some research on how to process remote WebRTC streams in real-time on the server, and came across keywords like RTMP ingestion and ffmpeg.
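
    To illustrate the RTMP-ingestion route: assuming the media server can republish the conference as an RTMP feed (the URL below is a placeholder), a libavformat-based consumer is one way to pull the stream server-side. A sketch, not a complete solution:

    #include <libavformat/avformat.h>

    // Pull a (placeholder) RTMP feed and walk its packets; the audio
    // packets would then be decoded, voice-transformed, re-encoded, and
    // published back out.
    AVFormatContext *ctx = NULL;
    AVPacket pkt;
    avformat_network_init();  // needed on older FFmpeg for network protocols
    if (avformat_open_input(&ctx, "rtmp://example-server/live/room1", NULL, NULL) == 0
        && avformat_find_stream_info(ctx, NULL) >= 0)
    {
        while (av_read_frame(ctx, &pkt) >= 0)
        {
            // decode / process / re-encode here
            av_packet_unref(&pkt);
        }
        avformat_close_input(&ctx);
    }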

    Here are a few questions I went through, but didn't find the answers I'm looking for:

      1. Receive webRTC video stream using python opencv in real-time
      2. Extract frames as images from an RTMP stream in real-time
      3. android stream real time video to streaming server

    I need help receiving the real-time stream on the server (any technology, preferably Python or Golang) and streaming it back.