
Media (0)

No media matching your criteria is available on the site.

Other articles (38)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • MediaSPIP Player: potential problems

    22 February 2011, by

    The player does not work on Internet Explorer
    On Internet Explorer (8 and 7 at least), the plugin uses the Flash player flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
    If the configuration of this Apache module contains a line resembling the following, try removing or commenting it out to see whether the player then works correctly: (...)

  • Keeping control of your media in your hands

    13 April 2011, by

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (6801)

  • How to process remote audio/video stream on WebRTC server in real-time? [closed]

    7 September 2020, by Kartik Rokde

    I'm new to audio/video streaming. I'm using AntMedia Pro for audio/video conferencing. There will be 5-8 hosts who will be speaking, and the expected audience size is 15-20k (worth mentioning because this won't be P2P conferencing, but an MCU architecture).

    I want to offer a feature where a user can request "convert the voice to female / robot / whatever", which would let them hear the manipulated voice in the conference.

    From what I understand, this requires real-time processing on the server: intercept the stream, do some processing (change the voice) on each of the tracks, and stream the result back to the requestor.

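    To illustrate the kind of per-track processing I mean, here is a minimal sketch of a cheap "robot voice" effect (ring modulation) applied to one block of PCM samples. The names and parameters are placeholders; in a real pipeline this would run on every decoded audio frame before re-encoding, and real voice conversion would use a phase vocoder or a neural model instead:

    ```python
    import numpy as np

    def robot_voice(samples, sample_rate, mod_hz=70.0):
        """Ring-modulate mono float32 PCM: multiplying the signal by a
        low-frequency sine carrier is a classic cheap "robot voice" effect."""
        t = np.arange(len(samples)) / sample_rate
        carrier = np.sin(2.0 * np.pi * mod_hz * t)
        return (samples * carrier).astype(np.float32)

    # One 20 ms frame of synthetic "speech": a 220 Hz tone at 48 kHz.
    rate = 48000
    frame = np.sin(2.0 * np.pi * 220.0 * np.arange(rate // 50) / rate).astype(np.float32)
    processed = robot_voice(frame, rate)
    ```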
    The first challenge I'm facing is how to get the stream and/or the individual tracks on the server?

    I did some research on how to process remote WebRTC streams in real time on the server, and came across keywords like RTMP ingestion and ffmpeg.

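    As far as I understand it, the RTMP-ingestion route would look roughly like this: have the media server push an RTMP copy of the conference, decode the audio to raw PCM with ffmpeg, and pipe it into a processing script. The URL, formats, and script name below are placeholders:

    ```shell
    # Pull a (hypothetical) RTMP copy of the stream, drop the video (-vn),
    # decode the audio to raw mono 16-bit PCM at 48 kHz, and write it to
    # stdout for a downstream processing script to consume.
    ffmpeg -i rtmp://media.example.com/live/conference \
           -vn -f s16le -acodec pcm_s16le -ac 1 -ar 48000 - | ./process_voice
    ```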
    Here are a few questions I went through, but they didn't have the answers I'm looking for:

    1. Receive webRTC video stream using python opencv in real-time
    2. Extract frames as images from an RTMP stream in real-time
    3. android stream real time video to streaming server

    I need help receiving the real-time stream on the server (any technology, preferably Python or Golang) and streaming it back.

  • provide time period in ffmpeg drawtext filter

    25 January 2014, by ZafarYousafi

    I am trying to add text to a video using ffmpeg, and I want the text to appear for a given period of time. I am trying to use the drawtext filter but don't know how to specify the time period. Can anybody please help me?

    Thanks
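    The drawtext filter accepts a timeline enable option, which looks like the right tool here: an expression such as between(t,5,10) limits the text to that time window. A sketch, where the input file, styling, and the 5-10 second window are placeholders:

    ```shell
    # Draw "Hello" only between seconds 5 and 10 of the video;
    # audio is passed through unchanged.
    ffmpeg -i input.mp4 \
           -vf "drawtext=text='Hello':fontcolor=white:fontsize=24:x=10:y=10:enable='between(t,5,10)'" \
           -c:a copy output.mp4
    ```

    Depending on the ffmpeg build, a fontfile= argument may also be required.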

  • av_read_frame and time stamps C++

    19 April 2014, by halfwaythru

    I am recording an RTSP H264 stream from an Axis IP camera using libavformat. This camera is supposed to stamp every frame with the time that the frame was acquired, and this information is supposed to be in the RTP packet header.

    This is the code that I am using to read in the frames.

    AVFormatContext *inputFormatCtxt = NULL;
    avformat_open_input(&inputFormatCtxt, inputURL, NULL, NULL);
    avformat_find_stream_info(inputFormatCtxt, NULL);

    while(av_read_frame(inputFormatCtxt, &packet) >=0)
    {
       if(packet.stream_index == videoStreamIndex)
       {
          // Do something to video packet.
       }
       else
       {
          // Do something to audio packet.
       }

       if (packet.pts != AV_NOPTS_VALUE)
           packet.dts = packet.pts    = av_rescale_q(packet.pts, stream->time_base, oStream->time_base);
       else
           NSLog(@"packet.pts == AV_NOPTS_VALUE");

       if(av_interleaved_write_frame(outputFormatCtxt, &packet) < 0)
           NSLog(@"Could not write out frame.");

       av_free_packet(&packet);
    }

    Now in AVPacket, the only time-related information is the pts and the dts. After converting them into seconds, these are supposed to be the offset of the packet (in seconds) from the start of the stream.

    My question is: how do I get the start time of the stream?
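    For reference, here is the computation I expect to perform once start_time_realtime is actually populated (as far as I can tell it is meant to be filled in from the first RTCP sender report's NTP timestamp): add the packet's pts, rescaled to microseconds, to that epoch start time. A self-contained sketch, with plain integer math standing in for av_rescale_q():

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Convert a packet pts (in stream time_base units) plus the demuxer's
     * start_time_realtime (microseconds since the Unix epoch) into an
     * absolute wall-clock capture time in microseconds. */
    static int64_t wall_clock_us(int64_t start_time_realtime,
                                 int64_t pts,
                                 int tb_num, int tb_den)
    {
        /* pts * tb_num / tb_den seconds, kept in microseconds. */
        int64_t offset_us = pts * tb_num * INT64_C(1000000) / tb_den;
        return start_time_realtime + offset_us;
    }

    int main(void)
    {
        /* Hypothetical values: time_base 1/90000 (typical for H.264 over
         * RTP), capture epoch t0, pts one second into the stream. */
        int64_t t0  = INT64_C(1600000000000000);
        int64_t pts = 90000;

        int64_t t = wall_clock_us(t0, pts, 1, 90000);
        printf("capture time: %lld us\n", (long long)t);
        assert(t == t0 + INT64_C(1000000));
        return 0;
    }
    ```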

    These are the things that I have tried:

    1.) In AVFormatContext there is a variable start_time_realtime that is "supposed" to be the start time of the stream in real-world time, in microseconds. This is exactly what I need, but no matter what I do this value is always 0 and never changes. Am I missing an initialization step, such that it never gets set?

    2.) Looking at this link, I added an RTPDemuxContext object to my code:

    RTSPState* rtsp_state = (RTSPState*) inputFormatCtxt->priv_data;
    RTSPStream* rtsp_stream = rtsp_state->rtsp_streams[0];
    RTPDemuxContext* rtp_demux_context = (RTPDemuxContext*) rtsp_stream->transport_priv;

    When I tried to look at the last_rtcp_reception_time, last_rtcp_ntp_time and last_rtcp_timestamp fields within the RTPDemuxContext object, these values were also always 0 and didn't change.

    3.) Following on from the last point, I tried to force-fetch a packet using ff_rtsp_fetch_packet(inputFormatCtxt, &packet). This did update the RTPDemuxContext timestamps, but only while stepping through the code; if I just ran the loop normally, the value always remained whatever was in the RTPDemuxContext object before the loop.

    int64_t x = 0;
    x = rtp_demux_context->last_rtcp_reception_time;   // x is 0.
    while(ff_rtsp_fetch_packet(inputFormatCtxt, &packet))
    {
       x = rtp_demux_context->last_rtcp_reception_time;   // x changes only when stepping through the code; otherwise it remains 0
    }

    At this point I have no idea what I am doing wrong. I can't seem to get this timestamp information, no matter what I try. Any help is much appreciated. Thanks!