
Media (91)

Other articles (30)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your Médiaspip install is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; and translations of existing documentation into other languages.
    To contribute, register with the project users’ mailing (...)

On other sites (7078)

  • How to create a timelapse directly to MP4 with FFmpeg by adding JPEG image data every N seconds?

    28 August 2019, by suromark

    I am trying to create a bash script with FFmpeg on an OctoPi 3D printer controller that periodically appends new frames to an MP4 timelapse output on the fly (e.g. compress/add one frame every N seconds instead of processing a folder with hundreds or thousands of images once the print is finished, to spread the required video calculations over the whole print time, during which the Raspberry Pi is mostly idle anyway).

    I’d like to reduce the storage overhead of the JPEG files and the processing delay once the print finishes.

    So far I’ve set up a bash script that starts on boot (via rc.local) and, together with a second watchdog script, monitors the OctoPrint API for signs of activity every 5 seconds (while true ... sleep 5). Once the progress value is non-false and the print head temperature is above 120 °C, the script polls the localhost URL of the webcam for a JPEG file, which is timestamped and stored in a folder named after the current print job (taken from the API); a sketch of such a loop follows below. Once the print is completed, I scp the files to my laptop, where I use FFmpeg to convert them to MP4.
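    For reference, a minimal sketch of such a polling loop, assuming a stock OctoPi install with jq available; the OctoPrint endpoints (/api/job, /api/printer), the mjpg-streamer snapshot URL, and the API key below are assumptions and placeholders, not the original script:

    #!/bin/bash
    API="http://localhost/api"
    KEY="REPLACE_WITH_API_KEY"   # placeholder OctoPrint API key

    while true; do
        progress=$(curl -s -H "X-Api-Key: $KEY" "$API/job" | jq -r '.progress.completion')
        temp=$(curl -s -H "X-Api-Key: $KEY" "$API/printer" | jq -r '.temperature.tool0.actual')
        if [ "$progress" != "null" ] && awk -v t="$temp" 'BEGIN { exit !(t > 120) }'; then
            # grab one timestamped webcam frame (mjpg-streamer snapshot, assumed URL)
            curl -s "http://localhost:8080/?action=snapshot" -o "frames/$(date +%s).jpg"
        fi
        sleep 5
    done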

    Ideally I’d like to keep the monitoring script behaving as it does, but instead of writing out JPEGs I’d like to pipe the data into FFmpeg (once every N seconds), upon which FFmpeg should process it as a new single frame of its stream-in-progress, writing out the data once a new GOP is complete (or finishing the last GOP in the buffer once a kill signal arrives).

    For this I most likely need to start FFmpeg from the main loop once a new job starts (to set the output file) and then establish some pipe structure to send the JPEG frames, I guess?

    So far I’ve not managed to find/google any example of this (or a similar workflow), though I think it’s not that exotic a use case...?

    Edit to add:
    This is my current FFmpeg bash command:

    ffmpeg \
    -framerate 30 \
    -pattern_type glob \
    -i '*.jpg' \
    -c:v libx264 \
    -vf "normalize=blackpt=black:whitept=white:smoothing=50" \
    -pix_fmt yuv420p \
    ../"$outname""$outdate"_lapse.mp4

    So far, I’ve managed to get FFmpeg to read from a named pipe, but it saves the output and stops after the first frame has passed through the pipe. I’d like to tell it to keep reading from (and waiting on) the pipe until it receives a kill (or other) signal:

    mkfifo /tmp/testpipe

    ffmpeg -y -framerate 30 -f image2pipe -i /tmp/testpipe -c:v libx264 -pix_fmt yuv420p "pipetestout.mp4"

    and from another terminal :

    cat some-image.jpg > /tmp/testpipe

    I think I’m close (but no cigar yet) ...
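
    The likely reason ffmpeg stops is that a FIFO delivers EOF as soon as its last writer (here, the single cat) closes, and on EOF ffmpeg finalizes the file. A minimal sketch that holds the write end open between frames; the loop condition, $N, and the frame filename are placeholders, not ffmpeg features:

    mkfifo /tmp/testpipe

    # Start the encoder in the background; image2pipe consumes concatenated
    # JPEGs from the pipe until it reaches EOF.
    ffmpeg -y -f image2pipe -framerate 30 -i /tmp/testpipe \
        -c:v libx264 -pix_fmt yuv420p pipetestout.mp4 &

    # Keep the write end open on FD 3 so ffmpeg never sees EOF between frames.
    exec 3>/tmp/testpipe

    while print_is_running; do      # placeholder condition
        cat latest-frame.jpg >&3    # write one whole JPEG per iteration
        sleep "$N"                  # placeholder interval
    done

    # Closing FD 3 delivers EOF; ffmpeg flushes its buffers and finalizes the MP4.
    exec 3>&-
    wait

    One caveat with this approach: an MP4 only becomes playable once ffmpeg exits cleanly and writes its index, so a power loss mid-print loses the whole file; a crash-tolerant container such as Matroska (.mkv) may be a safer intermediate format.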

  • Making my Discord bot automatically play music from WAV files on a loop

    5 December 2022, by Mativ9

    So I was trying to make a Discord bot in Python that would automatically join a voice channel and play my own music from a list in a loop. So far it joins the channel and shuffles the list so the music plays in random order, but when I try to write code so that after one song it plays the next one, it crashes and doesn't play anything (though it does join the channel):

import discord
import random
from discord.ext import commands
from discord import FFmpegPCMAudio

# playlist as a list: Iceland1..10, Norway1..11, Presents1..10, Autumn1..8, Covers1..12
queue = [FFmpegPCMAudio(f'{name}{i}.wav')
         for name, count in [('Iceland', 10), ('Norway', 11), ('Presents', 10),
                             ('Autumn', 8), ('Covers', 12)]
         for i in range(1, count + 1)]

intents = discord.Intents.default()
intents.message_content = True
client = commands.Bot(command_prefix='>', intents=intents)

@client.event
async def on_ready():
    global voice
    print("The Matt Bot is ready")
    print("--------------------------")
    await client.change_presence(activity=discord.Game('Matt Krupa')) # makes my bot show "Playing Matt Krupa"
    channel = client.get_channel(thechannelid) # getting the channel by ID
    voice = await channel.connect() # connecting to the channel
    random.shuffle(queue) # randomizing the playlist
    def after_song(): # moving the first song to the end so it's on loop, and playing the next one
        queue.append(queue[0])
        del queue[0]
        player = await voice.play(queue[0], after=await after_song())
    player = await voice.play(queue[0], after=await after_song()) # plays a song from the playlist, then runs after_song()

client.run(mytokenidontwanttoshowitsry)

    I wanted it to play all the songs in an infinite loop; I can't find how to correctly detect the end of a song...
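
    For what it's worth, a minimal sketch of the usual fix, under these assumptions: voice.play() is synchronous (it must not be awaited), after= takes a callable that receives the error (calling it yourself runs it immediately), and an FFmpegPCMAudio source can only be played once, so a fresh one is needed per song. The filenames, channel id, and token below are placeholders:

import random
import discord
from discord import FFmpegPCMAudio
from discord.ext import commands

filenames = [f'Iceland{i}.wav' for i in range(1, 11)]  # placeholder playlist

intents = discord.Intents.default()
client = commands.Bot(command_prefix='>', intents=intents)

def play_next(voice, error=None):
    # The after= callback receives the exception (or None) and runs in the
    # player thread, so it must be a plain function, never awaited.
    filenames.append(filenames.pop(0))     # rotate: finished song goes to the end
    source = FFmpegPCMAudio(filenames[0])  # sources are single-use; build a new one
    voice.play(source, after=lambda e: play_next(voice, e))

@client.event
async def on_ready():
    channel = client.get_channel(1234567890)  # placeholder channel id
    voice = await channel.connect()
    random.shuffle(filenames)
    # voice.play() returns immediately; pass the callback, don't call it.
    voice.play(FFmpegPCMAudio(filenames[0]), after=lambda e: play_next(voice, e))

client.run('YOUR_TOKEN')  # placeholder token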

  • FFMPEG's sws_scale for converting RGB to YUV420 image is extremely slow

    6 November 2020, by jackey balwani

    I am creating a basic application for recording screen activity using FFmpeg library calls.
    My program flow is as follows: fetch input data from the framebuffer (in RGB format) -> convert to YUV420 format and scale to the desired resolution -> encode the frame and send it to the muxer for MPEG2 conversion.

    The input data to my program is raw framebuffer data in RGB format. I am using FFmpeg's sws_scale API to convert the RGB image to YUV420 for encoding.
    Below is the code for converting the pixel format:

static int get_frame_buffer_data(AVFrame *pict, int frame_index, int width,
                                 int height, enum AVPixelFormat pix_fmt, char *rawFrame)
{
    struct SwsContext *sws_ctx = NULL;
    int ret = 0;
    rfbLog("[%s:%d]before conv_frame alloc:::pix_fmt = %d width = %d height = %d\n", __func__, __LINE__, pix_fmt, width, height);
    //picture->data[0] = (uint8_t*)&frameBuffer[0];
    picture->data[0] = (uint8_t*)&rawFrame[0];
    sws_ctx = sws_getCachedContext(sws_ctx, picture->width, picture->height, picture->format,
                                   width, height, pix_fmt, SWS_BICUBIC, NULL, NULL, NULL);
    if (!sws_ctx)
    {
        rfbLog("[%s:%d]Cannot initialize the conversion context\n", __func__, __LINE__);
        av_frame_free(&picture);
        sws_freeContext(sws_ctx);
        return -1;
    }
    rfbLog("[%s:%d]before sws_scale::: picture->linesize[0]=%d picture->height=%d pict->linesize = %d\n", __func__, __LINE__, picture->linesize[0], picture->height, pict->linesize[0]);
    ret = sws_scale(sws_ctx, (const uint8_t * const *)picture->data, picture->linesize, 0, picture->height, pict->data, pict->linesize);
    rfbLog("[%s:%d]after sws_scale::: picture->linesize[0]=%d picture->height=%d pict->linesize = %d returned height = %d\n", __func__, __LINE__, picture->linesize[0], picture->height, pict->linesize[0], ret);
    if (ret < 0)
    {
        rfbLog("[%s:%d]could not convert to yuv420\n", __func__, __LINE__);
        sws_freeContext(sws_ctx);
        av_frame_free(&picture);
        return -1;
    }
    sws_freeContext(sws_ctx);
    return 0;
}

    I noticed that adding this code makes the application very slow. Profiling showed that sws_scale takes most of the time converting the data to YUV420: almost 200 ms per frame, which drives CPU utilization very high and sometimes makes my application unresponsive.

    Can this be optimized, or is there an alternative approach for the conversion, and how can we achieve that?
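
    Two things stand out in the code above: the SwsContext is created and freed on every frame, so sws_getCachedContext() never actually reuses anything, and SWS_BICUBIC is an expensive filter for what is mostly a pixel-format conversion. A minimal sketch of the usual remedy, keeping one context alive across calls and using a cheaper scaler; the function and variable names here are illustrative, not the original code:

#include <libavutil/frame.h>
#include <libswscale/swscale.h>

/* Keep the context across frames: sws_getCachedContext() can only reuse
   a context that survives between calls with unchanged parameters. */
static struct SwsContext *g_sws_ctx = NULL;

static int convert_rgb_to_yuv420(const AVFrame *src, AVFrame *dst)
{
    /* SWS_FAST_BILINEAR (or SWS_POINT when no real scaling happens) is
       far cheaper than SWS_BICUBIC for an RGB -> YUV420 conversion. */
    g_sws_ctx = sws_getCachedContext(g_sws_ctx,
                                     src->width, src->height,
                                     (enum AVPixelFormat)src->format,
                                     dst->width, dst->height,
                                     (enum AVPixelFormat)dst->format,
                                     SWS_FAST_BILINEAR, NULL, NULL, NULL);
    if (!g_sws_ctx)
        return -1;

    return sws_scale(g_sws_ctx,
                     (const uint8_t * const *)src->data, src->linesize,
                     0, src->height, dst->data, dst->linesize);
}

    If the quality of fast bilinear turns out to be acceptable (for screen content it usually is), this alone should remove most of the per-frame cost; moving the conversion to a worker thread is a further option so the capture loop stays responsive.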