Advanced search

Media (0)

Keyword: - Tags -/diogene

No media matching your criteria is available on the site.

Other articles (83)

  • User profiles

    12 April 2011, by

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    The user can also edit their profile from their author page: a link in the navigation, "Modifier votre profil" (Edit your profile), is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" (Administer) section of the site.
    From there, in the navigation menu, you can reach a "Gestion des langues" (Language management) section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language; once an object exists, the language is greyed out in the configuration and (...)

  • The farm's recurring Cron tasks

    1 December 2010, by

    Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance of the shared hosting (mutualisation) on a regular basis. Combined with a system Cron on the central site of the shared hosting, this makes it easy to generate regular visits to the various sites and to prevent the tasks of rarely visited sites from being too (...)

On other sites (10446)

  • avformat/mov: fix seeking with HEVC open GOP files

    18 February 2022, by Clément Bœsch

    This was tested with media recorded from an iPhone XR and an iPhone 13.

    Here is what a typical stream looks like in coding order:

    ┌────────┬─────┬─────┬──────────┐
    │ sample │ PTS │ DTS │ keyframe │
    ├────────┼─────┼─────┼──────────┤
    ┊        ┊     ┊     ┊          ┊
    │     53 │ 560 │ 510 │ No       │
    │     54 │ 540 │ 520 │ No       │
    │     55 │ 530 │ 530 │ No       │
    │     56 │ 550 │ 540 │ No       │
    │     57 │ 600 │ 550 │ Yes      │
    │   * 58 │ 580 │ 560 │ No       │
    │   * 59 │ 570 │ 570 │ No       │
    │   * 60 │ 590 │ 580 │ No       │
    │     61 │ 640 │ 590 │ No       │
    │     62 │ 620 │ 600 │ No       │
    ┊        ┊     ┊     ┊          ┊

    In composition/display order:

    ┌────────┬─────┬─────┬──────────┐
    │ sample │ PTS │ DTS │ keyframe │
    ├────────┼─────┼─────┼──────────┤
    ┊        ┊     ┊     ┊          ┊
    │     55 │ 530 │ 530 │ No       │
    │     54 │ 540 │ 520 │ No       │
    │     56 │ 550 │ 540 │ No       │
    │     53 │ 560 │ 510 │ No       │
    │   * 59 │ 570 │ 570 │ No       │
    │   * 58 │ 580 │ 560 │ No       │
    │   * 60 │ 590 │ 580 │ No       │
    │     57 │ 600 │ 550 │ Yes      │
    │     63 │ 610 │ 610 │ No       │
    │     62 │ 620 │ 600 │ No       │
    ┊        ┊     ┊     ┊          ┊

    Samples/frames 58, 59 and 60 are B-frames which actually depend on the
    key frame (57). Here the key frame is not an IDR but a "CRA" (Clean
    Random Access).
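
    As an illustration (a standalone sketch, not FFmpeg code), the starred
    samples can be identified purely from the table above: they follow the
    CRA in decoding (DTS) order but precede it in presentation (PTS) order,
    so they cannot be decoded correctly when playback starts at the CRA
    after a seek.

    # Standalone Python sketch: flag the leading pictures of the CRA key frame
    # using only the (sample, PTS, DTS, keyframe) values quoted above.
    samples = [
        (53, 560, 510, False), (54, 540, 520, False), (55, 530, 530, False),
        (56, 550, 540, False), (57, 600, 550, True),  (58, 580, 560, False),
        (59, 570, 570, False), (60, 590, 580, False), (61, 640, 590, False),
        (62, 620, 600, False),
    ]

    def leading_pictures(samples):
        """Samples decoded after a key frame but presented before it."""
        keys = [(pts, dts) for (_, pts, dts, key) in samples if key]
        return [n for (n, pts, dts, key) in samples
                if any(dts > kdts and pts < kpts for (kpts, kdts) in keys)]

    print(leading_pictures(samples))  # -> [58, 59, 60]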

    Initially, I thought I could rely on the sdtp box (independent and
    disposable samples), but unfortunately:

    sdtp[54] is_leading:0 sample_depends_on:1 sample_is_depended_on:0 sample_has_redundancy:0
    sdtp[55] is_leading:0 sample_depends_on:1 sample_is_depended_on:2 sample_has_redundancy:0
    sdtp[56] is_leading:0 sample_depends_on:1 sample_is_depended_on:2 sample_has_redundancy:0
    sdtp[57] is_leading:0 sample_depends_on:2 sample_is_depended_on:0 sample_has_redundancy:0
    sdtp[58] is_leading:0 sample_depends_on:1 sample_is_depended_on:0 sample_has_redundancy:0
    sdtp[59] is_leading:0 sample_depends_on:1 sample_is_depended_on:2 sample_has_redundancy:0
    sdtp[60] is_leading:0 sample_depends_on:1 sample_is_depended_on:2 sample_has_redundancy:0
    sdtp[61] is_leading:0 sample_depends_on:1 sample_is_depended_on:0 sample_has_redundancy:0
    sdtp[62] is_leading:0 sample_depends_on:1 sample_is_depended_on:0 sample_has_redundancy:0

    The information that could have been useful here is is_leading, but it
    is set to 0 for all samples, so it was unusable.

    Instead, we need to rely on sgpd/sbgp tables. In my case the video track
    contained 3 sgpd tables with the following grouping types: tscl, sync
    and tsas. In the sync table we have the following 2 entries (only):

    sgpd.sync[1]: sync nal_unit_type:0x14
    sgpd.sync[2]: sync nal_unit_type:0x15

    (The count starts at 1 because 0 carries the "undefined" semantics; we'll
    see that later in the reference table.)

    The NAL unit types presented here correspond to:

    libavcodec/hevc.h: HEVC_NAL_IDR_N_LP = 20,
    libavcodec/hevc.h: HEVC_NAL_CRA_NUT = 21,

    In parallel, the sbgp sync table contains the following:

    ┌────┬───────┬─────┐
    │ id │ count │ gdi │
    ├────┼───────┼─────┤
    │  0 │     1 │   1 │
    │  1 │    56 │   0 │
    │  2 │     1 │   2 │
    │  3 │    59 │   0 │
    │  4 │     1 │   2 │
    │  5 │    59 │   0 │
    │  6 │     1 │   2 │
    │  7 │    59 │   0 │
    │  8 │     1 │   2 │
    │  9 │    59 │   0 │
    │ 10 │     1 │   2 │
    │ 11 │    11 │   0 │
    └────┴───────┴─────┘

    The gdi column (group description index) directly refers to the index in
    the sgpd.sync table. This means the first frame is an IDR, then we have
    batches of undefined frames interleaved with CRA frames. No IDR ever
    appears again (tried on a 30+ second sample).
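
    To make the gdi mapping concrete, here is a small standalone sketch (not
    mov.c code) that expands the run-length sbgp entries against the
    sgpd.sync reference table; assuming 0-based sample numbers, it reports
    one IDR at sample 0 and CRAs at samples 57, 117, 177, 237 and 297, which
    is consistent with the key frame at sample 57 in the tables above.

    # Expand the sbgp "sync" runs using the sgpd.sync reference table quoted above.
    sgpd_sync = {1: "IDR (nal_unit_type 0x14)", 2: "CRA (nal_unit_type 0x15)"}  # 0 = undefined

    sbgp_sync = [  # (count, group description index) pairs from the table above
        (1, 1), (56, 0), (1, 2), (59, 0), (1, 2), (59, 0),
        (1, 2), (59, 0), (1, 2), (59, 0), (1, 2), (11, 0),
    ]

    sample = 0
    for count, gdi in sbgp_sync:
        for _ in range(count):
            if gdi:  # non-zero gdi -> this sample belongs to a sync group
                print(f"sample {sample}: {sgpd_sync[gdi]}")
            sample += 1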

    With that information, we can build a heuristic using the presentation
    order.

    A few things needed to be introduced in this commit:

    1. min_sample_duration is extracted from the stts: we need the minimal
    step between samples in order to PTS-step backward to a valid point
    2. In order to avoid looping over the ctts table on every seek, we build
    an expanded list of sample offsets which will be used to translate
    from DTS to PTS
    3. An open_key_samples index to keep track of all the non-IDR key
    frames; for now it only supports HEVC CRA frames. We should probably
    add BLA frames as well, but I don't have any sample so I preferred to
    leave that for later (see the sketch after this list for how these
    pieces fit together)
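
    To show how these three pieces can work together, here is a rough
    standalone sketch of the idea (illustrative only: it is not the code
    added to libavformat/mov.c, and every name in it is made up). It reuses
    the sample table from above and steps a requested presentation
    timestamp backward by the minimal sample duration until it no longer
    lands on a leading picture of an open (CRA) key frame.

    from bisect import bisect_right

    # (sample, PTS, DTS) in decoding order, from the coding-order table above;
    # the PTS column is what the expanded ctts offsets give us (point 2).
    SAMPLES = [
        (53, 560, 510), (54, 540, 520), (55, 530, 530), (56, 550, 540),
        (57, 600, 550), (58, 580, 560), (59, 570, 570), (60, 590, 580),
        (61, 640, 590), (62, 620, 600),
    ]
    OPEN_KEY_SAMPLES = [57]   # point 3: non-IDR (CRA) key frames
    MIN_SAMPLE_DURATION = 10  # point 1: minimal step, taken from the stts

    def is_open_key_sample(sample):
        # Linear scan, as in the commit; this is the spot where a binary
        # search would be the obvious optimisation.
        return sample in OPEN_KEY_SAMPLES

    def presented_at(target_pts):
        """Sample whose presentation time covers target_pts."""
        by_pts = sorted(SAMPLES, key=lambda s: s[1])
        i = bisect_right([pts for (_, pts, _) in by_pts], target_pts) - 1
        return by_pts[max(i, 0)]

    def is_leading(sample):
        """Decoded after an open key frame but presented before it, hence
        not decodable when decoding starts at that key frame."""
        _, pts, dts = sample
        return any(dts > kdts and pts < kpts
                   for (k, kpts, kdts) in SAMPLES if is_open_key_sample(k))

    def adjust_seek_target(target_pts):
        """PTS-step backward until the request no longer lands on a leading picture."""
        while is_leading(presented_at(target_pts)):
            target_pts -= MIN_SAMPLE_DURATION
        return target_pts

    print(adjust_seek_target(585))  # 585 falls on sample 58 (leading) -> 565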

    It is entirely possible I missed something obvious in my approach, but I
    couldn't come up with a better solution. Also, as mentioned in the diff,
    we could optimize is_open_key_sample(), but the linear scaling overhead
    should be fine for now since it only happens in seek events.

    Fixing this issue prevents broken packets from being sent to the decoder.
    With the FFmpeg hevc decoder the frames are skipped; with VideoToolbox
    the frames glitch.

    • [DH] libavformat/isom.h
    • [DH] libavformat/mov.c

  • Corrupted HEIC tile when converting to JPEG

    27 March 2018, by Kim Bowles Sørhus

    I’m having trouble converting a .HEIC image to a JPEG. The .HEIC file is an image taken with an iPhone running the latest iOS public beta. I’m using the library Nokia provided to parse the file and extract the image tiles from the .HEIC file, convert them to JPEG and glue them together using ffmpeg/montage.

    There is a bit too much code to paste it all into this question, so I put all of it in this GitHub repo. It’s pretty self-explanatory and should be runnable with just a few dependencies; they are explained in the repo’s README. This has all been done on OS X, by the way.

    The .HEIC file contains an 8x6 grid of images (tiles), and if you put them together you get the complete image. Simply put, whatever image I input, the 7th tile is corrupted as shown below, and I really don’t understand why. I’ve filed an issue with Nokia, but the repo seems pretty dead and I don’t really expect an answer there.
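
    For reference, the final "glue them together" step mentioned above can be done with ImageMagick’s montage; the sketch below wraps it in Python purely for illustration, and the tile file names and output name are assumptions rather than anything taken from the repo.

    # Assemble 48 already-decoded JPEG tiles into the 8x6 grid described above.
    # File names (tile_00.jpg .. tile_47.jpg) and the output name are hypothetical.
    import subprocess

    tiles = [f"tile_{i:02d}.jpg" for i in range(48)]  # 8 x 6 = 48 tiles
    subprocess.run(
        ["montage", *tiles,
         "-tile", "8x6",       # lay the tiles out on the 8x6 grid
         "-geometry", "+0+0",  # no border or spacing between tiles
         "full_image.jpg"],
        check=True,
    )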

  • How can I improve the up-time of my coffee pot live stream?

    26 April 2017, by tww0003

    Some Background on the Project:

    Like most software developers I depend on coffee to keep me running, and so do my coworkers. I had an old iPhone sitting around, so I decided to pay homage to the first webcam and live stream my office coffee pot.

    The stream has become popular within my company, so I want to make sure it will stay online with as little effort as possible on my part. As of right now, it will occasionally go down, and I have to manually get it up and running again.

    My Setup:

    I have nginx set up on a DigitalOcean server (my nginx.conf is shown below), and I downloaded an rtmp streaming app for my iPhone.

    The phone is set to stream to example.com/live/stream and then I use an ffmpeg command to take that stream, strip the audio (the live stream is public and I don’t want coworkers to feel like they have to be careful about what they say), and then make it accessible at rtmp://example.com/live/coffee and example.com/hls/coffee.m3u8.

    Since I’m not too familiar with ffmpeg, I had to google around to find an appropriate command to strip the audio from the coffee stream, and I found this:

    ffmpeg -i rtmp://localhost/live/stream -vcodec libx264 -vprofile baseline -acodec aac -strict -2 -f flv -an rtmp://localhost/live/coffee

    Essentially all I know about this command is that the input stream comes from localhost/live/stream, that it strips the audio with -an, and that it then outputs to rtmp://localhost/live/coffee.

    I would assume that ffmpeg -i rtmp://localhost/live/stream -an rtmp://localhost/live/coffee would have the same effect, but the page I found the command on was dealing with ffmpeg and nginx, so I figured the extra parameters were useful.

    What I’ve noticed with this command is that it will error out, taking the live stream down. I wrote a small bash script to rerun the command when it stops, but I don’t think this is the best solution.

    Here is the bash script:

    while true;
    do
           ffmpeg -i rtmp://localhost/live/stream -vcodec libx264 -vprofile baseline -acodec aac -strict -2 -f flv -an rtmp://localhost/live/coffee
           echo 'Something went wrong. Retrying...'
           sleep 1
    done
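
    (For comparison only, here is a rough Python equivalent of the retry loop above. It additionally uses stream copy, since the video does not need to be re-encoded just to drop the audio; whether stream copy is acceptable for this particular nginx-rtmp/HLS setup is an assumption, not something taken from the question.)

    # Rough, untested equivalent of the bash loop above: restart ffmpeg whenever
    # it exits, and copy the video stream instead of re-encoding it.
    import subprocess
    import time

    CMD = [
        "ffmpeg",
        "-i", "rtmp://localhost/live/stream",
        "-an",            # drop the audio track
        "-c:v", "copy",   # pass the video through untouched (assumes the incoming
                          # codec already suits the HLS/DASH output)
        "-f", "flv",
        "rtmp://localhost/live/coffee",
    ]

    while True:
        result = subprocess.run(CMD)
        print(f"ffmpeg exited with code {result.returncode}, retrying...")
        time.sleep(1)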

    I’m curious about two things:

    1. What is the best way to strip audio from an rtmp stream?
    2. What is the proper configuration for nginx to ensure that my rtmp stream will stay up for as long as possible?

    Since I have close to zero experience with nginx, ffmpeg, and rtmp streaming, any help or tips would be appreciated.

    Here is my nginx.conf file:

    worker_processes  1;

    events {
       worker_connections  1024;
    }


    http {
       include       mime.types;
       default_type  application/octet-stream;

       sendfile        on;

       keepalive_timeout  65;

       server {
           listen       80;
           server_name  localhost;

           location / {
               root   html;
               index  index.html index.htm;
           }

           error_page   500 502 503 504  /50x.html;
           location = /50x.html {
               root   html;
           }

           location /stat {
                   rtmp_stat all;
                   rtmp_stat_stylesheet stat.xsl;
                   allow 127.0.0.1;
           }
           location /stat.xsl {
                   root html;
           }
           location /hls {
                   root /tmp;
                   add_header Cache-Control no-cache;
           }
           location /dash {
                   root /tmp;
                   add_header Cache-Control no-cache;
                   add_header Access-Control-Allow-Origin *;
           }
       }
    }

    rtmp {

       server {

           listen 1935;
           chunk_size 4000;

           application live {
               live on;

               hls on;
               hls_path /tmp/hls;

               dash on;
               dash_path /tmp/dash;
           }
       }
    }

    Edit: I’m also running into this same issue: https://trac.ffmpeg.org/ticket/4401