
Media (1)

Keyword: - Tags -/Christian Nold

Other articles (49)

  • Customize by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the news item creation form.
    News item creation form: for a document of the news type, the default fields are: Publication date (customize the publication date) (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (9966)

  • Gstreamer, x264enc "Redistribute latency..." error

    20 August 2021, by Jason

    I'm trying to set up a video pipeline with very limited bandwidth. I was able to do it with two Raspberry Pis using the lines below. The first is for the camera Pi and the second is to watch the stream:

    


    gst-launch-1.0 rpicamsrc preview=false !  'video/x-h264, width=800, height=600, framerate=30/1' ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! udpsink host=YOUR_PC_IP port=5000
gst-launch-1.0 udpsrc port=5000 ! gdpdepay ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink sync=false


    


    It works, but I go over my bandwidth limit if there is movement. I'm not sure if there is a way to limit bandwidth by setting a parameter here:

    


    'video/x-h264, width=800, height=600, framerate=30/1'


    


    From what I can find online, I have to use something like x264enc. I've followed tutorials, but I can't get x264enc to work. It always outputs "Redistribute latency..." on both machines when run and then just sits there.

    


    I've tried using x264enc as follows:

    


    gst-launch-1.0 rpicamsrc preview=false !  'video/x-raw, width=800, height=600, framerate=30/1' ! x264enc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! udpsink host=YOUR_PC_IP port=5000
gst-launch-1.0 rpicamsrc preview=false !  'video/x-raw, width=800, height=600, framerate=30/1' ! x264enc tune=zerolatency ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! udpsink host=YOUR_PC_IP port=5000
gst-launch-1.0 rpicamsrc preview=false ! x264enc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! udpsink host=YOUR_PC_IP port=5000
gst-launch-1.0 rpicamsrc preview=false ! x264enc tune=zerolatency ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! udpsink host=YOUR_PC_IP port=5000


    


    Based on tutorials, I would think some of those should work. Other threads say that tune=zerolatency fixes this problem, at least the ones with the same "Redistribute latency..." output. I don't know what I'm doing wrong.
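
    For reference, the kind of bandwidth-capped pipeline I'm after would look roughly like this. It is an untested sketch: x264enc's bitrate property is in kbit/s (1000 here is just an example target), and videoconvert is added in case x264enc does not directly accept rpicamsrc's raw output:

    gst-launch-1.0 rpicamsrc preview=false ! 'video/x-raw, width=800, height=600, framerate=30/1' ! videoconvert ! x264enc tune=zerolatency speed-preset=ultrafast bitrate=1000 ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! udpsink host=YOUR_PC_IP port=5000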

    


    Any help would be appreciated. Thanks!

    


  • Detection of virtual background on video using ffmpeg

    28 July 2021, by Marcos G

    I am trying to use ffmpeg to detect the use of a virtual background in a video, like the ones used in Google Meet. I have some ideas, but none seem to work:

    


      

    • Edge color: When using a chroma key you can get shades of green/blue on the outline of the subject. This is called spill and can be detectable, but it is ruled out by the fact that you don't need a chroma key to fake a background (example: Google Meet).

    • Still image: Most virtual backgrounds are a still image pasted behind the subject. The problem with detecting this is that most real backgrounds are also like still images, without much movement (see the sketch after this list).

    • Blur: When faking a background without a chroma key, the outline of the subject becomes very blurry (this is more noticeable when the subject moves), but I can't find a way to detect it using ffmpeg.
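
    As a rough sketch of how the still-image idea could be checked (the crop region and thresholds are arbitrary assumptions): crop a corner that is unlikely to contain the subject and run it through ffmpeg's freezedetect filter. A real camera feed usually shows sensor noise there, while a pasted still background can stay pixel-identical for long stretches:

    ffmpeg -i input.mp4 -vf "crop=160:120:0:0,freezedetect=n=0.001:d=5" -map 0:v -f null -

    freezedetect logs freeze_start/freeze_end events when the cropped region stays below the noise threshold for the given duration, which would at least hint at a pasted background.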


    


    How can I do this? I'm open to trying other techniques such as AI.

    


    Thanks in advance.

    


  • Video from Android Camera muxed with libavformat not playable in all players and audio not synced

    19 January 2021, by Debrugger

    I'm using avformat to mux encoded video and audio received from Android into an mp4 file. The resulting file is playable with ffplay, though it sometimes outputs "No Frame!" during playback. VLC sort of plays it back, but with glitches that look like motion data from one video combined with color data from another. The video player on my phone does not play it at all.

    


    On top of that, the audio is not properly synced, even though MediaCodec manages to produce a proper file with nothing more than what the code below has available (i.e. presentationTimeUs in microseconds).
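
    For reference, this is the conversion I expect av_packet_rescale_ts to apply, assuming the muxer keeps the requested 1/90000 time base after avformat_write_header() (the 33333 µs value is just an example frame timestamp):

#include <stdio.h>
#include <libavutil/rational.h>
#include <libavutil/mathematics.h>

int main(void) {
    AVRational us = {1, 1000000}; // microseconds, as delivered by MediaCodec
    AVRational tb = {1, 90000};   // the stream time base requested below
    // 33333 us * 90000 / 1000000 = 2999.97, which rounds to 3000 ticks
    printf("%lld\n", (long long) av_rescale_q(33333, us, tb)); // prints 3000
    return 0;
}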

    


    This is my code (error checking omitted for clarity):

    


    // Initializing muxer
AVStream *videoStream = avformat_new_stream(outputContext, nullptr);
videoStreamIndex = videoStream->index;

videoStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
videoStream->codecpar->codec_id = AV_CODEC_ID_H264;
videoStream->codecpar->bit_rate = bitrate;
videoStream->codecpar->width = width;
videoStream->codecpar->height = height;
videoStream->time_base.num = 1;
videoStream->time_base.den = 90000;

AVStream* audioStream = avformat_new_stream(outputContext, nullptr);
audioStreamIndex = audioStream->index;
audioStream->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
audioStream->codecpar->codec_id = AV_CODEC_ID_MP4ALS;
audioStream->codecpar->bit_rate = audiobitrate;
audioStream->codecpar->sample_rate = audiosampleRate;
audioStream->codecpar->channels = audioChannelCount;
audioStream->time_base.num = 1;
audioStream->time_base.den = 90000;

avformat_write_header(outputContext, &opts);

writtenAudio = writtenVideo = false;


// presentationTimeUs is the absolute timestamp when the encoded frame was received in Android code. 
// This is what is usually fed into MediaCodec
int writeVideoFrame(uint8_t *data, int size, int64_t presentationTimeUs) {
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.flags |= AV_PKT_FLAG_KEY; // I know setting this on every frame is wrong. When do I set it?
    pkt.data = data;
    pkt.size = size;
    pkt.dts = AV_NOPTS_VALUE;
    pkt.pts = presentationTimeUs;
    if (writtenVideo) { // since the timestamp is absolute we have to subtract the initial offset
        pkt.pts -= firstVideoPts;
    }
    // rescale from microseconds to the stream timebase
    av_packet_rescale_ts(&pkt, AVRational { 1, 1000000 }, outputContext->streams[videoStreamIndex]->time_base);
    pkt.dts = AV_NOPTS_VALUE;
    pkt.stream_index = videoStreamIndex;
    if (!writtenVideo) {
        AVStream* videoStream = outputContext->streams[videoStreamIndex];
        videoStream->start_time = pkt.pts;
        firstVideoPts = presentationTimeUs;
    }
    if (av_interleaved_write_frame(outputContext, &pkt) < 0) {
        return 1;
    }
    writtenVideo = true;
    return 0;
}

int writeAudioFrame(uint8_t *data, int size, int64_t presentationTimeUs) {
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = data;
    pkt.size = size;
    pkt.stream_index = audioStreamIndex;
    pkt.pts = presentationTimeUs;
    av_packet_rescale_ts(&pkt, AVRational { 1, 1000000}, outputContext->streams[audioStreamIndex]->time_base);
    pkt.flags |= AV_PKT_FLAG_KEY;
    pkt.dts = AV_NOPTS_VALUE;
    if (!writtenAudio) {
        outputContext->streams[audioStreamIndex]->start_time = pkt.pts;
    }
    if (av_interleaved_write_frame(outputContext, &pkt) < 0) {
        return 1;
    }
    writtenAudio = true;
    return 0;
}

void close() {
    av_write_trailer(outputContext);
    running = false;

    // cleanup AVFormatContexts etc
}


    


    I think I'm doing the same as what's shown in the avformat docs and examples, and the produced video is somewhat usable (re-encoding it with ffmpeg yields a working video). But something must still be wrong.
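
    For completeness, the timestamps the muxer actually writes can be inspected with plain ffprobe (output.mp4 here just stands in for the produced file):

    ffprobe -hide_banner -select_streams v:0 -show_packets -show_entries packet=pts,dts,duration_time,flags output.mp4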