Advanced search

Media (0)

Word: - Tags -/protocoles

No media matching your criteria is available on the site.

Other articles (98)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Installation in standalone mode

    4 February 2011, by

    Installing the MediaSPIP distribution involves several steps: retrieving the required files (two methods are possible at this point: installing the ZIP archive containing the whole distribution, or retrieving the sources of each module separately via SVN); the preconfiguration; and the final installation.
    [mediaspip_zip]Installing the MediaSPIP ZIP archive
    This installation mode is the simplest way to install the whole distribution (...)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP means:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it handles a set of dynamic tags for use in the context of the Semantic Web.
    XMP makes it possible to store, in the form of an XML document, information about a file: title, author, history (...)
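
    As a rough illustration (not taken from the article; the title, author and date values here are made up), a minimal XMP packet carrying a title and an author with the Dublin Core schema could look like this:

    <x:xmpmeta xmlns:x="adobe:ns:meta/">
      <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
        <rdf:Description rdf:about=""
            xmlns:dc="http://purl.org/dc/elements/1.1/"
            xmlns:xmp="http://ns.adobe.com/xap/1.0/">
          <dc:title>
            <rdf:Alt>
              <rdf:li xml:lang="x-default">Example title</rdf:li>
            </rdf:Alt>
          </dc:title>
          <dc:creator>
            <rdf:Seq>
              <rdf:li>Example author</rdf:li>
            </rdf:Seq>
          </dc:creator>
          <xmp:CreateDate>2011-04-01T00:00:00Z</xmp:CreateDate>
        </rdf:Description>
      </rdf:RDF>
    </x:xmpmeta>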

On other sites (6717)

  • How to split streaming video into pieces FFMPEG

    17 June 2019, by danilshik

    I have a Windows batch script that uses Livestreamer (streamlink) to record the broadcast to a video file:

    :loop

    set day=%DATE:~0,2%
    set month=%DATE:~3,2%
    set year=%DATE:~6,4%

    set hour=%TIME:~0,2%
    set minute=%TIME:~3,2%
    set second=%TIME:~6,2%

    set YYYYMMDD=%day%_%month%_%year%_%hour%_%minute%_%second%


    streamlink --hls-live-edge 99999 --hls-segment-threads 10 --ringbuffer-size 1024M -o %YYYYMMDD%.ts https://www.twitch.tv/silvername best
    goto loop

    How can I break this video into pieces?

    I tried the following, but it did not work; it reports an error about the arguments:

    ffmpeg -i "streamlink --hls-live-edge 99999 --hls-segment-threads 10 --ringbuffer-size 1024M -o %YYYYMMDD%.ts https://www.twitch.tv/manyrin best" -f segment -segment_time 1 -vcodec copy -acodec copy "%03d.ts"
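
    The error is most likely because the whole streamlink command line is passed to ffmpeg as if it were an input file name. Two possible fixes, sketched below under the assumption that streamlink's -O/--stdout option and ffmpeg's segment muxer behave as documented (segment_time is in seconds, so the 600 used here is just an arbitrary 10-minute example; %%03d is %03d escaped for use inside a .bat file):

    rem Option 1 (sketch): split the already recorded file into fixed-length pieces
    ffmpeg -i %YYYYMMDD%.ts -c copy -f segment -segment_time 600 -reset_timestamps 1 "%YYYYMMDD%_%%03d.ts"

    rem Option 2 (sketch): pipe streamlink's stdout straight into ffmpeg and segment live
    streamlink --hls-live-edge 99999 --hls-segment-threads 10 --ringbuffer-size 1024M -O https://www.twitch.tv/silvername best | ffmpeg -i pipe:0 -c copy -f segment -segment_time 600 -reset_timestamps 1 "%YYYYMMDD%_%%03d.ts"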

  • Creating a sequence of images from lyrics to use in ffmpeg

    19 September 2018, by SKS

    I’m trying to make an MP3 + Lyric -> MP4 program in Python.

    I have a lyrics file like this:

    [00:00.60]Revelation, chapter 4
    [00:02.34]After these things I looked,
    [00:04.10]and behold a door was opened in heaven,
    [00:06.41]and the first voice which I heard, as it were,
    [00:08.78]of a trumpet speaking with me, said:
    [00:11.09]Come up hither,
    [00:12.16]and I will shew thee the things which must be done hereafter.
    [00:15.78]And immediately I was in the spirit:
    [00:18.03]and behold there was a throne set in heaven,
    [00:20.72]and upon the throne one sitting.
    [00:22.85]And he that sat,
    [00:23.91]was to the sight like the jasper and the sardine stone;
    [00:26.97]and there was a rainbow round about the throne,
    [00:29.16]in sight like unto an emerald.
    [00:31.35]And round about the throne were four and twenty seats;
    [00:34.85]and upon the seats, four and twenty ancients sitting,
    [00:38.03]clothed in white garments, and on their heads were crowns of gold.
    [00:41.97]And from the throne proceeded lightnings, and voices, and thunders;
    [00:46.03]and there were seven lamps burning before the throne,
    [00:48.60]which are the seven spirits of God.
    [00:51.23]And in the sight of the throne was, as it were,
    [00:53.79]a sea of glass like to crystal;
    [00:56.16]and in the midst of the throne, and round about the throne,
    [00:59.29]were four living creatures, full of eyes before and behind.
    [01:03.79]And the first living creature was like a lion:

    I’m trying to create a sequence of images from the lyrics to use in ffmpeg.

    os.system(ffmpeg_path + " -r 2 -i " + images_path + "image%1d.png -i " + audio_file + " -vcodec mpeg4 -y " + video_name)

    I tried to work out the number of images to make for each line by subtracting the timestamp of the current line from that of the next line. It works, but produces very inconsistent results.

    import os
    import datetime
    import time
    import math
    from PIL import Image, ImageDraw


    ffmpeg_path = os.getcwd() + "\\ffmpeg\\bin\\ffmpeg.exe"
    images_path = os.getcwd() + "\\test_output\\"
    audio_file = os.getcwd() + "\\audio.mp3"
    lyric_file = os.getcwd() + "\\lyric.lrc"

    video_name = "movie.mp4"


    def save():

       lyric_to_images()
       os.system(ffmpeg_path + " -r 2 -i " + images_path + "image%1d.png -i " + audio_file + " -vcodec mpeg4 -y " + video_name)


    def lyric_to_images():

       file  = open(lyric_file, "r")

       data = file.readlines()

       startOfLyric = True
       lstTimestamp = []

       images_to_make = 0
       from_second = 0.0
       to_second = 0.0

       for line in data:
           vTime = line[1:9] # 00:00.60

           temp = vTime.split(':')

           minute = float(temp[0])
           #a = float(temp[1].split('.'))
           #second = float((minute * 60) + int(a[0]))
           second = (minute * 60) + float(temp[1])

           lstTimestamp.append(second)

       counter = 1

       for i, second in enumerate(lstTimestamp):

           if startOfLyric is True:
               startOfLyric = False
               #first line is always 3 seconds (images to make = 3x2)
               for x in range(1, 7):
                   writeImage(data[i][10:], 'image' + str(counter))
                   counter += 1
           else:
               from_second = lstTimestamp[i-1]
               to_second = second

               difference = to_second - from_second
               images_to_make = int(difference * 2)

               for x in range(1, int(images_to_make+1)):
                   writeImage(data[i-1][10:], 'image'+str(counter))
                   counter += 1

       file.close()

    def writeImage(v_text, filename):

       img = Image.new('RGB', (480, 320), color = (73, 109, 137))

       d = ImageDraw.Draw(img)
       d.text((10,10), v_text, fill=(255,255,0))

       img.save(os.getcwd() + "\\test_output\\" + filename + ".png")


    save()

    Is there any efficient and accurate way to calculate how many images I need to create for each line?

    Note: However many images I create has to be multiplied by 2, because I’m using -r 2 for FFmpeg (2 FPS).
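
    One way to keep the counts consistent (a sketch, assuming the same 2 fps / -r 2 setup and the LRC format shown above) is to work in frame indices rather than summing per-line image counts: for each line, the number of images is the difference between the rounded frame index of the next timestamp and the rounded frame index of the current one, so rounding errors never accumulate:

    import re

    FPS = 2  # must match the -r value passed to ffmpeg

    def parse_lrc(lines):
        """Extract (seconds, text) pairs from [mm:ss.xx] LRC lines."""
        entries = []
        for line in lines:
            m = re.match(r"\[(\d+):(\d+\.\d+)\](.*)", line)
            if m:
                entries.append((int(m.group(1)) * 60 + float(m.group(2)), m.group(3).strip()))
        return entries

    def images_per_line(entries, last_line_duration=3.0):
        """Yield (text, image_count) pairs whose counts stay in step with the timeline."""
        for i, (start, text) in enumerate(entries):
            # The final line has no following timestamp, so its duration is assumed here.
            end = entries[i + 1][0] if i + 1 < len(entries) else start + last_line_duration
            # Rounding happens once per boundary, so per-line errors cancel out
            # instead of drifting as the song goes on.
            count = round(end * FPS) - round(start * FPS)
            yield text, max(count, 1)

    Each (text, image_count) pair can then be fed to the existing writeImage() loop; the gap before the first timestamp can be covered by padding the first line with round(first_timestamp * FPS) extra images instead of a fixed 3 seconds.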

  • How to decode in C a stream from this noname almost-UVC grayscale camera

    18 January 2019, by scriptfoo

    Edit: I found the cause. The stream always begins with something that is not a JPEG; only after that does a normal MJPEG stream follow. Interestingly, not all of the small examples of using V4L2/MJPEG decoders manage to split what the camera produces into frames correctly. Something called capturev4l2.c is a rare example of doing it properly. Possibly there is some detail that decides whether the camera’s bugginess is worked around or not.

    I have a noname almost-UVC-compliant camera (it fails several compatibility tests). It is a relatively cheap global-shutter camera, which is why I would like to use it instead of something properly documented. It outputs what mplayer reports (and plays properly) as

    Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
    libavcodec version 57.107.100 (external)
    Selected video codec: [ffmjpeg] vfm: ffmpeg (FFmpeg MJPEG)

    ffprobe shows the following:

    [mjpeg @ 0x55c086dcc080] Format mjpeg detected only with low score of 25, misdetection possible!
    Input #0, mjpeg, from '/home/sc/Desktop/a.raw':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: mjpeg, yuvj422p(pc, bt470bg/unknown/unknown), 640x480, 25 tbr, 1200k tbn, 25 tbc

    But unlike mplayer, it is unable to play it.

    I tried decode_jpeg_raw from mjpegtools; it complains about the header, which seems to change with each captured stream. So it does not look like a plain unwrapped stream of JPEG images.

    I thus tried 0_hello_world.c from libavcodec/libavformat, but it stops at avformat_open_input() with the error Invalid data found when processing input. A 100-frame sample file is available here: a.raw. Do you have any idea how to decode it in C into a plain bitmap?

    The file is grayscale and does not begin with a constant value; guvcview and mplayer are the only players I know of that can decode it without artifacts...
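
    Given the cause found in the edit above (the capture only becomes a valid MJPEG stream after some leading garbage), one workaround is to drop everything before the first JPEG start-of-image marker (FF D8 FF) and only then hand the data to avformat/avcodec. A rough C sketch of that preprocessing step (file names and the overall approach are illustrative, not a confirmed fix for this particular camera):

    #include <stdio.h>
    #include <stdlib.h>

    /* Return the offset of the first JPEG SOI marker (FF D8 FF), or -1 if absent. */
    static long find_jpeg_start(const unsigned char *buf, size_t len)
    {
        for (size_t i = 0; i + 2 < len; i++)
            if (buf[i] == 0xFF && buf[i + 1] == 0xD8 && buf[i + 2] == 0xFF)
                return (long)i;
        return -1;
    }

    int main(int argc, char **argv)
    {
        if (argc < 3) {
            fprintf(stderr, "usage: %s input.raw output.mjpeg\n", argv[0]);
            return 1;
        }

        FILE *in = fopen(argv[1], "rb");
        if (!in) { perror("fopen input"); return 1; }
        fseek(in, 0, SEEK_END);
        long size = ftell(in);
        fseek(in, 0, SEEK_SET);

        unsigned char *buf = malloc(size);
        if (!buf || fread(buf, 1, size, in) != (size_t)size) {
            fprintf(stderr, "read failed\n");
            return 1;
        }
        fclose(in);

        long off = find_jpeg_start(buf, size);
        if (off < 0) {
            fprintf(stderr, "no JPEG SOI marker found\n");
            return 1;
        }

        /* Everything from the first SOI onward should be an ordinary MJPEG
           stream that avformat_open_input() can probe and decode. */
        FILE *out = fopen(argv[2], "wb");
        if (!out) { perror("fopen output"); return 1; }
        fwrite(buf + off, 1, size - off, out);
        fclose(out);
        free(buf);
        return 0;
    }

    After trimming, the 0_hello_world.c example (or ffprobe) can be pointed at the output file to check whether probing now succeeds.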