Advanced search

Media (21)

Keyword: - Tags -/Nine Inch Nails

Other articles (63)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all of the software dependencies on the server.
    If you want to use this archive for an installation in "farm" mode, you will also need to make other modifications (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all of the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Improving the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-selection fields. See the following two images to compare.
    To do so, simply activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling the use of Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (6855)

  • ffmpeg frame mapping to rgb32 dynamic resource using directx memcpy

    9 January 2019, by Sang Hun Kim

    I have been trying to solve this problem for a month of googling,
    but now I have to ask for help here.

    I want to render a frame decoded with ffmpeg.
    I decode the video and get a frame (I guess its format is YUV444P),
    so I convert it to YUV420P, and then convert again to RGB32 (the RGB conversion works the same way, just with a different format).

    AVFrame* YUVFrame = av_frame_alloc();

    SwsContext * swsContext = sws_getContext(pVFrame->width, pVFrame->height, pVideoCodecCtx->pix_fmt,
       pVFrame->width, pVFrame->height, AV_PIX_FMT_YUV420P, SWS_FAST_BILINEAR, 0, 0, 0);
    if (swsContext == NULL) {
       return false;
    }
    *YUVFrame = *pVFrame;
    YUVFrame->data[0] = pVFrame->data[0];   YUVFrame->linesize[0] = pVFrame->linesize[0];
    YUVFrame->data[1] = pVFrame->data[1];   YUVFrame->linesize[1] = pVFrame->linesize[1];
    YUVFrame->data[2] = pVFrame->data[2];   YUVFrame->linesize[2] = pVFrame->linesize[2];
    YUVFrame->width = pVFrame->width;       YUVFrame->height = pVFrame->height;


    sws_scale(swsContext, pVFrame->data, pVFrame->linesize, 0, (int)pVFrame->height, YUVFrame->data, YUVFrame->linesize);

    if (YUVFrame == NULL) {
       av_frame_unref(YUVFrame);
       return false;
    }

    and using that frame, I try to render it into a D3D11 2D texture.

    ZeroMemory(&TextureDesc, sizeof(TextureDesc));

    TextureDesc.Height = pFrame->height;
    TextureDesc.Width = pFrame->width;
    TextureDesc.MipLevels = 1;
    TextureDesc.ArraySize = 1;
    TextureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;            //size 16
    TextureDesc.SampleDesc.Count = 1;
    TextureDesc.SampleDesc.Quality = 0;
    TextureDesc.Usage = D3D11_USAGE_DYNAMIC;
    TextureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    TextureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    TextureDesc.MiscFlags = 0;

    result = m_device->CreateTexture2D(&TextureDesc, NULL, &m_2DTex);
    if (FAILED(result))     return false;

    ShaderResourceViewDesc.Format = TextureDesc.Format;
    ShaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    ShaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
    ShaderResourceViewDesc.Texture2D.MipLevels = 1;

    I also map the texture (because of the DYNAMIC usage and CPU_ACCESS_WRITE flags):

    D3D11_MAPPED_SUBRESOURCE S_mappedResource_tt = { 0, };
    ZeroMemory(&S_mappedResource_tt, sizeof(D3D11_MAPPED_SUBRESOURCE));
    DWORD   Stride = pFrame->linesize[0];

    result = m_deviceContext->Map(m_2DTex, 0, D3D11_MAP_WRITE_DISCARD, 0, &S_mappedResource_tt);
    if (FAILED(result)) return false;
    BYTE* mappedData = reinterpret_cast<BYTE*>(S_mappedResource_tt.pData);
    for (UINT i = 0; i < 3; ++i) {
       memcpy(mappedData, pFrame->data, Stride * 3);
       mappedData += S_mappedResource_tt.RowPitch;
       *pFrame->data += Stride * 3;
    }

    m_deviceContext->Unmap(m_2DTex, 0);

    result = m_device->CreateShaderResourceView(m_2DTex, &ShaderResourceViewDesc, &m_ShaderResourceView);
    if (FAILED(result))     return false;

    m_deviceContext->PSSetShaderResources(0, 1, &m_ShaderResourceView);

    but it shows me just a black screen (nothing renders).
    I guess the memcpy size is wrong.
    The biggest problem is that I don't know what the problem is.

    Question 1:
    Is the frame decoded and converted correctly with ffmpeg? (I think it is, but that is just a guess.)

    Question 2:
    Is there any problem with how the 2D texture is created for mapping?

    Question 3:
    What sizes should I pass to memcpy (in relation to the pixel format)?

    Thank you for reading; please reply.
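    The copy in the loop above moves Stride * 3 bytes per iteration, but for a packed pixel format the usual arithmetic is one row of width * bytes_per_pixel per memcpy, repeated height times, with the destination advancing by RowPitch and the source by its own linesize. A minimal, D3D-free sketch of that row-by-row arithmetic (all names and sizes here are illustrative, not taken from the question's frame):

```python
# Sketch: copy a tightly packed image into a destination whose rows are
# padded out to a larger pitch (as a mapped dynamic texture's rows are).
def copy_rows(src: bytes, width: int, height: int,
              src_linesize: int, dst_row_pitch: int,
              bytes_per_pixel: int) -> bytearray:
    dst = bytearray(dst_row_pitch * height)
    row_bytes = width * bytes_per_pixel   # what each memcpy should move
    for y in range(height):
        s = y * src_linesize              # source rows advance by linesize
        d = y * dst_row_pitch             # destination rows advance by pitch
        dst[d:d + row_bytes] = src[s:s + row_bytes]
    return dst

# Tiny example: a 2x2 image, 4 bytes per pixel, source rows padded to
# 12 bytes, destination rows padded to 16 bytes.
src = bytes(range(24))
dst = copy_rows(src, width=2, height=2,
                src_linesize=12, dst_row_pitch=16, bytes_per_pixel=4)
```

    The same loop shape applies to a mapped D3D11 texture: RowPitch plays the role of dst_row_pitch, and the per-row copy size comes from the texture's pixel size, not from the source stride times a plane count.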

  • How to split streaming video into pieces FFMPEG

    17 June 2019, by danilshik

    I have a bat script (Windows) that uses Streamlink to record the broadcast to a video file:

    :loop

    set day=%DATE:~0,2%
    set month=%DATE:~3,2%
    set year=%DATE:~6,4%

    set hour=%TIME:~0,2%
    set minute=%TIME:~3,2%
    set second=%TIME:~6,2%

    set YYYYMMDD=%day%_%month%_%year%_%hour%_%minute%_%second%


    streamlink --hls-live-edge 99999 --hls-segment-threads 10 --ringbuffer-size 1024M -o %YYYYMMDD%.ts https://www.twitch.tv/silvername best
    goto loop

    How can I break this video into pieces?

    I tried to do it like this, but it did not work; it reports an error about the arguments:

    ffmpeg -i "streamlink --hls-live-edge 99999 --hls-segment-threads 10 --ringbuffer-size 1024M -o %YYYYMMDD%.ts https://www.twitch.tv/manyrin best" -f segment -segment_time 1 -vcodec copy -acodec copy "%03d.ts"
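    As an aside, ffmpeg's -i option expects a file or a pipe, not a whole streamlink command line, which is why the arguments are rejected. One commonly used pattern (a sketch with a placeholder URL and segment length, assuming streamlink's --stdout flag and ffmpeg's segment muxer) is to have streamlink write the stream to stdout and let ffmpeg split it:

```python
import subprocess

# Sketch: streamlink writes the stream to stdout, ffmpeg reads it from
# stdin and cuts it into fixed-length .ts pieces. The URL, quality and
# the 60-second segment length are placeholders.
streamlink_cmd = [
    "streamlink", "--stdout", "--hls-live-edge", "99999",
    "https://www.twitch.tv/silvername", "best",
]
ffmpeg_cmd = [
    "ffmpeg", "-i", "pipe:0",                # read from stdin
    "-f", "segment", "-segment_time", "60",  # one output file per 60 s
    "-c", "copy",                            # no re-encoding
    "%03d.ts",
]

def run_pipeline(dry_run: bool = True):
    """Return the two commands (dry run) or actually wire up the pipe."""
    if dry_run:
        return streamlink_cmd, ffmpeg_cmd
    src = subprocess.Popen(streamlink_cmd, stdout=subprocess.PIPE)
    subprocess.run(ffmpeg_cmd, stdin=src.stdout)
```

    With -c copy no re-encoding happens, so the split is cheap; -segment_time controls the length of each piece.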
  • Creating a sequence of images from lyrics to use in ffmpeg

    19 September 2018, by SKS

    I’m trying to make an MP3 + lyrics -> MP4 program in Python.

    I have a lyrics file like this :

    [00:00.60]Revelation, chapter 4
    [00:02.34]After these things I looked,
    [00:04.10]and behold a door was opened in heaven,
    [00:06.41]and the first voice which I heard, as it were,
    [00:08.78]of a trumpet speaking with me, said:
    [00:11.09]Come up hither,
    [00:12.16]and I will shew thee the things which must be done hereafter.
    [00:15.78]And immediately I was in the spirit:
    [00:18.03]and behold there was a throne set in heaven,
    [00:20.72]and upon the throne one sitting.
    [00:22.85]And he that sat,
    [00:23.91]was to the sight like the jasper and the sardine stone;
    [00:26.97]and there was a rainbow round about the throne,
    [00:29.16]in sight like unto an emerald.
    [00:31.35]And round about the throne were four and twenty seats;
    [00:34.85]and upon the seats, four and twenty ancients sitting,
    [00:38.03]clothed in white garments, and on their heads were crowns of gold.
    [00:41.97]And from the throne proceeded lightnings, and voices, and thunders;
    [00:46.03]and there were seven lamps burning before the throne,
    [00:48.60]which are the seven spirits of God.
    [00:51.23]And in the sight of the throne was, as it were,
    [00:53.79]a sea of glass like to crystal;
    [00:56.16]and in the midst of the throne, and round about the throne,
    [00:59.29]were four living creatures, full of eyes before and behind.
    [01:03.79]And the first living creature was like a lion:

    I’m trying to create a sequence of images from the lyrics to use with ffmpeg.

    os.system(ffmpeg_path + " -r 2 -i " + images_path + "image%1d.png -i " + audio_file + " -vcodec mpeg4 -y " + video_name)

    I tried finding out the number of images to make for each line by subtracting the next line’s timestamp in seconds from the current line’s. It works, but produces very inconsistent results.

    import os
    import datetime
    import time
    import math
    from PIL import Image, ImageDraw


    ffmpeg_path = os.getcwd() + "\\ffmpeg\\bin\\ffmpeg.exe"
    images_path = os.getcwd() + "\\test_output\\"
    audio_file = os.getcwd() + "\\audio.mp3"
    lyric_file = os.getcwd() + "\\lyric.lrc"

    video_name = "movie.mp4"


    def save():

       lyric_to_images()
       os.system(ffmpeg_path + " -r 2 -i " + images_path + "image%1d.png -i " + audio_file + " -vcodec mpeg4 -y " + video_name)


    def lyric_to_images():

       file  = open(lyric_file, "r")

       data = file.readlines()

       startOfLyric = True
       lstTimestamp = []

       images_to_make = 0
       from_second = 0.0
       to_second = 0.0

       for line in data:
           vTime = line[1:9] # 00:00.60

           temp = vTime.split(':')

           minute = float(temp[0])
           #a = float(temp[1].split('.'))
           #second = float((minute * 60) + int(a[0]))
           second = (minute * 60) + float(temp[1])

           lstTimestamp.append(second)

       counter = 1

       for i, second in enumerate(lstTimestamp):

           if startOfLyric is True:
               startOfLyric = False
               #first line is always 3 seconds (images to make = 3x2)
               for x in range(1, 7):
                   writeImage(data[i][10:], 'image' + str(counter))
                   counter += 1
           else:
               from_second = lstTimestamp[i-1]
               to_second = second

               difference = to_second - from_second
               images_to_make = int(difference * 2)

               for x in range(1, int(images_to_make+1)):
                   writeImage(data[i-1][10:], 'image'+str(counter))
                   counter += 1

       file.close()

    def writeImage(v_text, filename):

       img = Image.new('RGB', (480, 320), color = (73, 109, 137))

       d = ImageDraw.Draw(img)
       d.text((10,10), v_text, fill=(255,255,0))

       img.save(os.getcwd() + "\\test_output\\" + filename + ".png")


    save()

    Is there an efficient and accurate way to calculate how many images I need to create for each line?

    Note: however many images I create will have to be multiplied by 2, because I’m using -r 2 for FFmpeg (2 FPS).
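    One way to keep those counts consistent is to avoid accumulating per-line rounding error: turn every timestamp into an absolute frame index with round(t * fps) and take differences between consecutive indices, so the running total always matches the timeline. A small sketch of that idea (the function name and the sample timestamps are only illustrative):

```python
def images_per_line(timestamps, fps=2):
    """Images to write for each lyric line at a fixed frame rate.

    Each line's count is the difference between consecutive absolute
    frame indices, so rounding error never accumulates across lines.
    The last line needs a duration of its own (e.g. the audio length).
    """
    frame_index = [round(t * fps) for t in timestamps]
    return [b - a for a, b in zip(frame_index, frame_index[1:])]

# First few timestamps from the lyric file above, at -r 2 (2 FPS):
counts = images_per_line([0.60, 2.34, 4.10, 6.41], fps=2)
```

    Because the frame rate is already folded into the indices, there is no separate multiply-by-2 step: the sum of the counts always equals the frame index of the last timestamp minus that of the first.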