Advanced search

Media (29)

Keyword: - Tags -/Music

Other articles (103)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Monitoring MediaSPIP farms (and SPIP while we're at it)

    31 May 2013

    When you manage several (or even several dozen) MediaSPIP sites on the same installation, it can be very handy to get certain information at a glance.
    The purpose of this article is to document the Munin monitoring scripts developed with the help of Infini.
    These scripts are installed automatically by the automatic installation script if a Munin installation is detected.
    Description of the scripts
    Three Munin scripts have been developed:
    1. mediaspip_medias
    A script for (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries: FFMpeg: the main encoder, used to transcode almost all types of video and audio files into formats readable on the Internet (see this tutorial for its installation); Oggz-tools: inspection tools for ogg files; Mediainfo: retrieves information from most video and audio formats;
    Additional, optional binaries: flvtool2: (...)

On other sites (10137)

  • How to visualize matplotlib animation in Jupyter notebook

    23 April 2020, by anonymous13

    I am trying to create a racing bar chart similar to the one in the link (https://towardsdatascience.com/bar-chart-race-in-python-with-matplotlib-8e687a5c8a41).
However, I am unable to see the animation in my Jupyter notebook.

    Code:

    import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import matplotlib.animation as animation
from IPython.display import HTML

df = pd.read_csv('https://gist.githubusercontent.com/johnburnmurdoch/4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations', 
                 usecols=['name', 'group', 'year', 'value'])

current_year = 2018
dff = (df[df['year'].eq(current_year)]
       .sort_values(by='value', ascending=True)
       .head(10))

colors = dict(zip(
    ['India', 'Europe', 'Asia', 'Latin America',
     'Middle East', 'North America', 'Africa'],
    ['#adb0ff', '#ffb3ff', '#90d595', '#e48381',
     '#aafbff', '#f7bb5f', '#eafb50']
))
group_lk = df.set_index('name')['group'].to_dict()


fig, ax = plt.subplots(figsize=(15, 8))
def draw_barchart(year):
    dff = df[df['year'].eq(year)].sort_values(by='value', ascending=True).tail(10)
    ax.clear()
    ax.barh(dff['name'], dff['value'], color=[colors[group_lk[x]] for x in dff['name']])
    dx = dff['value'].max() / 200
    for i, (value, name) in enumerate(zip(dff['value'], dff['name'])):
        ax.text(value-dx, i,     name,           size=14, weight=600, ha='right', va='bottom')
        ax.text(value-dx, i-.25, group_lk[name], size=10, color='#444444', ha='right', va='baseline')
        ax.text(value+dx, i,     f'{value:,.0f}',  size=14, ha='left',  va='center')
    # ... polished styles
    ax.text(1, 0.4, year, transform=ax.transAxes, color='#777777', size=46, ha='right', weight=800)
    ax.text(0, 1.06, 'Population (thousands)', transform=ax.transAxes, size=12, color='#777777')
    ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
    ax.xaxis.set_ticks_position('top')
    ax.tick_params(axis='x', colors='#777777', labelsize=12)
    ax.set_yticks([])
    ax.margins(0, 0.01)
    ax.grid(which='major', axis='x', linestyle='-')
    ax.set_axisbelow(True)
    ax.text(0, 1.12, 'The most populous cities in the world from 1500 to 2018',
            transform=ax.transAxes, size=24, weight=600, ha='left')
    ax.text(1, 0, 'by @pratapvardhan; credit @jburnmurdoch', transform=ax.transAxes, ha='right',
            color='#777777', bbox=dict(facecolor='white', alpha=0.8, edgecolor='white'))
    plt.box(False)

draw_barchart(2018)

import matplotlib.animation as animation
from IPython.display import HTML
fig, ax = plt.subplots(figsize=(15, 8))
animator = animation.FuncAnimation(fig, draw_barchart, frames=range(1968, 2019))
HTML(animator.to_jshtml()) 

    Below is what I tried and the errors I got:

    HTML(animator.to_jshtml())                 <-- static output with buttons; unable to visualize the animation
    plt.rcParams["animation.html"] = "jshtml"  <-- no error and no output
    HTML(animator.to_html5_video())            <-- "Requested MovieWriter (ffmpeg) not available"

    Note: I have FFmpeg installed on my system.
    Can you help me with this issue?
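
    For reference, the pattern commonly suggested for displaying a FuncAnimation inline in a classic Jupyter notebook is sketched below. It keeps a single figure (rather than creating a second one after draw_barchart(2018) has already drawn), closes it so a stray static frame is not shown, and makes the jshtml player the last expression of the cell. This is a generic sketch that assumes the draw_barchart function and data defined above, not a confirmed fix for this particular environment.

import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML

# One figure only: draw_barchart() draws into the module-level `ax`,
# so the animation must be attached to the figure that owns that axes.
fig, ax = plt.subplots(figsize=(15, 8))

animator = animation.FuncAnimation(fig, draw_barchart, frames=range(1968, 2019))

plt.close(fig)              # avoid an extra static image of the last frame
HTML(animator.to_jshtml())  # must be the last expression in the cell

    For the to_html5_video() route, matplotlib must be able to locate the ffmpeg binary; when ffmpeg is installed but not on the notebook's PATH, pointing matplotlib at it with plt.rcParams['animation.ffmpeg_path'] = '/path/to/ffmpeg' (the path here is illustrative) before calling to_html5_video() is a commonly suggested workaround.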

  • lavc: Implement Dolby Vision RPU parsing

    3 January 2022, by Niklas Haas
    lavc: Implement Dolby Vision RPU parsing
    

    Based on a mixture of guesswork, partial documentation in patents, and
    reverse engineering of real-world samples. Confirmed working for all the
    samples I've thrown at it.

    Contains some annoying machinery to persist these values in between
    frames, which is needed in theory even though I've never actually seen a
    sample that relies on it in practice. May or may not work.

    Since the distinction matters greatly for parsing the color matrix
    values, this includes a small helper function to guess the right profile
    from the RPU itself in case the user has forgotten to forward the dovi
    configuration record to the decoder. (Which in practice, only ffmpeg.c
    and ffplay do..)

    Notable omissions / deviations:
    - CRC32 verification. This is based on the MPEG2 CRC32 type, which is
    similar to IEEE CRC32 but apparently different in subtle enough ways
    that I could not get it to pass verification no matter what parameters
    I fed to av_crc. It's possible the code needs some changes.
    - Linear interpolation support. Nothing documents this (beyond its
    existence) and no samples use it, so impossible to implement.
    - All of the extension metadata blocks, but these contain values that
    seem largely congruent with ST2094, HDR10, or other existing forms of
    side data, so I will defer parsing/attaching them to a future commit.
    - The patent describes a mechanism for predicting coefficients from
    previous RPUs, but the bit for the flag whether to use the
    prediction deltas or signal entirely new coefficients does not seem to
    be present in actual RPUs, so we ignore this subsystem entirely.
    - In the patent's spec, the NLQ subsystem also loops over
    num_nlq_pivots, but even in the patent the number is hard-coded to one
    iteration rather than signalled. So we only store one set of coefs.

    Heavily influenced by https://github.com/quietvoid/dovi_tool
    Documentation drawn from US Patent 10,701,399 B2 and ETSI GS CCM 001

    Signed-off-by: Niklas Haas <git@haasn.dev>
    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] configure
    • [DH] libavcodec/Makefile
    • [DH] libavcodec/dovi_rpu.c
    • [DH] libavcodec/dovi_rpu.h
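
    A side note on the CRC32 omission described above: libavutil's CRC helper does expose the MPEG-2 flavour of CRC-32 (the non-reflected AV_CRC_32_IEEE table, used elsewhere in FFmpeg with an all-ones initial value, e.g. for MPEG-TS PSI sections), so a verification attempt would look roughly like the sketch below. Which bytes of the RPU the CRC actually covers, and whether that initial value is even the right one, is exactly the unresolved part the commit message mentions, so this is illustrative only.

#include "libavutil/crc.h"

/* Illustrative only: an MPEG-2 style CRC-32 over an arbitrary byte range.
 * The exact range and init value for Dolby Vision RPUs are the open question. */
static int rpu_crc_matches(const uint8_t *buf, size_t size, uint32_t expected)
{
    const AVCRC *table = av_crc_get_table(AV_CRC_32_IEEE); /* MSB-first polynomial */
    uint32_t crc = av_crc(table, 0xFFFFFFFF, buf, size);
    return crc == expected;
}
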
  • How to compensate frame rate underrun while muxing video to mp4 container with libav

    17 September 2021, by Nuno Santos

    I have a process that generates video frames in real time. I'm muxing the generated video frame stream into a video file (x264 codec in an mp4 container).

    I'm using ffmpeg-libav and I'm basing my code on the muxing.c example. The problem with the example is that it isn't a real-world scenario: frames are generated in a while loop for a given stream duration, never missing a frame.

    In my program, frames are supposed to be generated at FPS; however, depending on the hardware's capacity, it may produce fewer frames than FPS. When I initialize the video stream context, I declare that the frame rate is FPS:

    AVRational r = { 1, FPS };
    ost->st->time_base = r;

    This specifies that the video will have a frame rate of FPS, but if fewer frames are produced, playback will be faster because the video will still be played back as if it had all the declared frames per second (for example, if only 15 frames are actually produced in a second but the file claims FPS = 30, that second of capture plays back in half a second).

    After googling a lot about this topic, I understand that the key to fixing this is to manipulate pts and dts, but I still haven't found a solution that works.

    There are two key functions for writing video frames in the muxing.c example, routines that I'm using in my program:

AVFrame* get_video_frame(int timestamp, OutputStream *ost, const QImage &image)
{
    /* when we pass a frame to the encoder, it may keep a reference to it
     * internally; make sure we do not overwrite it here */
    if (av_frame_make_writable(ost->frame) < 0)
        exit(1);

    av_image_fill_arrays(ost->tmp_frame->data, ost->tmp_frame->linesize, image.bits(),
                         AV_PIX_FMT_RGBA, ost->frame->width, ost->frame->height, 8);
    libyuv::ABGRToI420(ost->tmp_frame->data[0], ost->tmp_frame->linesize[0],
                       ost->frame->data[0], ost->frame->linesize[0],
                       ost->frame->data[1], ost->frame->linesize[1],
                       ost->frame->data[2], ost->frame->linesize[2],
                       ost->tmp_frame->width, -ost->tmp_frame->height);

    #if 1 // this is my attempt to rescale pts, but crashes with pts < dts
    ost->frame->pts = av_rescale_q(timestamp, AVRational{1, 1000}, ost->st->time_base);
    #else
    ost->frame->pts = ost->next_pts++;
    #endif

    return ost->frame;
}

    In the original code, the pts is simply an incrementing integer for each frame. What I'm trying to do is pass a timestamp in ms since the beginning of the recording so that I can rescale the pts. When I rescale the pts, the program crashes complaining that pts is lower than dts.

    From what I've been reading, the pts/dts manipulation is supposed to be done at the packet level, so I have also tried to manipulate things in the write_frame routine, without success.

int write_frame(AVFormatContext *fmt_ctx, AVCodecContext *c, AVStream *st, AVFrame *frame)
{
    int ret;

    // send the frame to the encoder
    ret = avcodec_send_frame(c, frame);

    if (ret < 0)
    {
        fprintf(stderr, "Error sending a frame to the encoder\n");
        exit(1);
    }

    while (ret >= 0)
    {
        AVPacket pkt = { 0 };

        ret = avcodec_receive_packet(c, &pkt);

        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        {
            break;
        }
        else if (ret < 0)
        {
            //fprintf(stderr, "Error encoding a frame: %s\n", av_err2str(ret));
            exit(1);
        }

        /* rescale output packet timestamp values from codec to stream timebase */
        av_packet_rescale_ts(&pkt, c->time_base, st->time_base);
        pkt.stream_index = st->index;

        /* Write the compressed frame to the media file. */
        //log_packet(fmt_ctx, &pkt);
        ret = av_interleaved_write_frame(fmt_ctx, &pkt);
        av_packet_unref(&pkt);

        if (ret < 0)
        {
            //fprintf(stderr, "Error while writing output packet: %s\n", av_err2str(ret));
            exit(1);
        }
    }

    return ret == AVERROR_EOF ? 1 : 0;
}

    How should I manipulate dts and pts so that I can produce a correct video at a given frame rate even when it does not have all the frames specified at stream initialization? Where should I do that manipulation? In get_video_frame? In write_frame? In both?

    Am I heading in the right direction? What am I missing?
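
    A minimal sketch of one commonly suggested direction is below: keep feeding the encoder the real capture time (the millisecond timestamp mentioned above) rescaled into the encoder time base, and force the frame timestamps to be strictly increasing, which avoids the duplicate or backwards timestamps that typically trigger "pts is lower than dts" style errors; packets are then still converted to the stream time base by av_packet_rescale_ts() in write_frame as before. The last_pts field is a hypothetical addition to muxing.c's OutputStream (initialized to AV_NOPTS_VALUE), not something the example provides, and a time base finer than {1, FPS} (for example {1, 1000}) makes the rounding of the rescaled timestamps less lossy.

/* Sketch only: derive frame->pts from a wall-clock timestamp in milliseconds.
 * Assumes c is the encoder's AVCodecContext and ost->last_pts is a new
 * OutputStream field initialized to AV_NOPTS_VALUE. */
static void set_frame_pts(OutputStream *ost, AVCodecContext *c, int64_t timestamp_ms)
{
    AVRational ms_tb = { 1, 1000 };

    /* convert "milliseconds since recording started" into the encoder time base */
    int64_t pts = av_rescale_q(timestamp_ms, ms_tb, c->time_base);

    /* encoders and muxers expect strictly increasing pts; nudge duplicates forward */
    if (ost->last_pts != AV_NOPTS_VALUE && pts <= ost->last_pts)
        pts = ost->last_pts + 1;

    ost->frame->pts = pts;
    ost->last_pts   = pts;
}

    With real timestamps written this way, a second of capture that only yields, say, 15 frames still spans a full second of pts in the container, so players keep the original speed instead of playing the footage back too fast. Whether this alone removes the crash depends on how the encoder's time_base was configured, so treat it as a direction to explore rather than a definitive fix.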
