
Other articles (98)

  • Managing creation and editing rights for objects

    8 February 2011, by

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum user status required to use it, including: writing content on the site, configurable in the form template management; adding notes to articles; adding captions and annotations to images;

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images (png, gif, jpg, bmp and more); audio (MP3, Ogg, Wav and more); video (AVI, MP4, OGV, mpg, mov, wmv and more); text, code and other data (OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth) and (...)

  • Uploading media and themes via FTP

    31 May 2013, by

    MediaSPIP also handles media transferred via FTP. If you prefer to upload this way, retrieve the access credentials for your MediaSPIP site and use your favourite FTP client.
    From the start you will find the following folders in your FTP space: config/ (the site's configuration folder); IMG/ (media already processed and online on the site); local/ (the site's cache directory); themes/ (custom themes and stylesheets); tmp/ (working folder) (...)

On other sites (12357)

  • What can serve as a nullptr in a Cython wrapper for a C++ uint8_t multidimensional array?

    20 July 2020, by yose93

    I'm stuck on a problem. I have to fill a C++ structure with yuv420p frame data in my Cython wrapper:

    #define FR_PLANE_COUNT_MAX 8

    typedef struct fr_frame_s {
        int format = 0;
        int width  = 0;
        int height = 0;

        uint8_t* data[FR_PLANE_COUNT_MAX];
        int      stride[FR_PLANE_COUNT_MAX];
        int      size[FR_PLANE_COUNT_MAX];

        long long time = 0;
    } fr_frame_t;

    Here data is just an array of pointers with a length of 8. Its first three elements are meant to hold the y, u and v byte arrays, and the rest are just nullptr values. The next chunk of code is what I need to reimplement in pure Python to fill the structure accordingly:

    bool VideoCapture::ConvertFrame(const AVFrame *src, fr_frame_t &dst)
    {
        if (src == NULL)
            return false;

        for (size_t i = 0; i < FR_PLANE_COUNT_MAX; ++i)
        {
            if (src->data[i] != nullptr)
            {
                const int line = src->linesize[i];
                // plane 0 (y) is full height; u and v are half height in yuv420p
                const int size = i == 0 ? line * src->height : int(line * (src->height / 2.0));
                dst.data[i] = (uint8_t*)malloc(size);
                memcpy(dst.data[i], src->data[i], size);
                dst.size[i] = size;
                dst.stride[i] = src->linesize[i];
            } else {
                dst.data[i] = nullptr;
                dst.size[i] = 0;
                dst.stride[i] = 0;
            }
        }
        return true;
    }

    Here, all the values after the y, u, v arrays apparently just have to be nullptr. So what can I use as a nullptr equivalent to fill the np.ndarray slots after y, u and v?

    And my Python code:

    def _get_read_frames(
        self,
        video: pathlib.PosixPath,
    ) -> Generator[Tuple[Union[teyefr.MetadataImage, float]], None, None]:
        """Video frames reader."""
        self._cap = cv2.VideoCapture(str(video))
        self._total_frames = self._cap.get(cv2.CAP_PROP_FRAME_COUNT)
        self._fps = math.ceil(self._cap.get(cv2.CAP_PROP_FPS))
        self._duration = self._total_frames / self._fps

        while self._cap.isOpened():
            _, frame = self._cap.read()

            if frame is None:
                break
            
            yuv420_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2YUV)

            self._process_yuv420_frame(yuv420_frame)

        self._cap.release()
    
    def _process_yuv420_frame(self, yuv420_frame: np.ndarray) -> None:
        """To fill `self._fr_frames` list.

        Splits already converted frame into 3-channels y, u, v
        and takes all required data to fill `FRFrame` and push it.
        """
        data = np.array([])
        stride = np.array([])
        size = np.array([])
        frame_data = {}.fromkeys(FRFrame.__dataclass_fields__.keys())

        channels = (y, u, v) = cv2.split(yuv420_frame)

        for i in range(FR_PLANE_COUNT_MAX):
            if i < len(channels):
                # np.concatenate takes a sequence and returns a new array
                data = np.concatenate([data, channels[i].ravel()])
            else:
                data = np.concatenate([data, np.array([])])
            
        frame_data['height'], frame_data['width'], _ = yuv420_frame.shape


    


    Please advise.
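    One possible Python-side sketch of the plane layout (this assumes nothing about the asker's FRFrame dataclass, and the helper name is made up): keep a fixed list of FR_PLANE_COUNT_MAX entries and use None as the Python stand-in for the C-side nullptr, with size and stride set to 0 for the empty slots:

    ```python
    import numpy as np

    FR_PLANE_COUNT_MAX = 8  # mirrors the C #define

    def build_planes(y, u, v):
        """Pack y, u, v into a fixed list of 8 plane slots.

        Slots beyond the three real planes hold None, the Python
        stand-in for the C-side nullptr; their size/stride are 0.
        """
        channels = (y, u, v)
        data, stride, size = [], [], []
        for i in range(FR_PLANE_COUNT_MAX):
            if i < len(channels):
                plane = np.ascontiguousarray(channels[i], dtype=np.uint8)
                data.append(plane)
                stride.append(plane.shape[1])  # bytes per row for uint8
                size.append(plane.size)
            else:
                data.append(None)              # would become NULL on the C side
                stride.append(0)
                size.append(0)
        return data, stride, size

    # yuv420p shapes: y is full size, u and v are half size in each dimension
    y = np.zeros((4, 4), dtype=np.uint8)
    u = np.zeros((2, 2), dtype=np.uint8)
    v = np.zeros((2, 2), dtype=np.uint8)
    data, stride, size = build_planes(y, u, v)
    ```

    In a Cython wrapper, a None entry can then be turned into `<uint8_t*>NULL` when the struct is filled, while the real planes expose their bytes through typed memoryviews.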

    


  • C# process FFMPEG output from standard out (pipe) [duplicate]

    30 January 2018, by Alexander Streckov

    I want to extract the current image from the FFMPEG standard output and show it on a C# form. The stream source itself is raw h264 data, which is converted into images and piped to standard output. Here is my code, but I have no idea how to process the output (maybe via a MemoryStream):

    public Process ffproc = new Process();
    private void xxxFFplay()
    {
       ffproc.StartInfo.FileName = "ffmpeg.exe";
       ffproc.StartInfo.Arguments = "-y -i udp://127.0.0.1:8888/ -q:v 1 -huffman optimal -update 1 -f mjpeg -";

       ffproc.StartInfo.CreateNoWindow = true;
       ffproc.StartInfo.RedirectStandardOutput = true;
       ffproc.StartInfo.UseShellExecute = false;

       ffproc.EnableRaisingEvents = true;
       ffproc.OutputDataReceived += (o, e) => Debug.WriteLine(e.Data ?? "NULL", "ffplay");
       ffproc.ErrorDataReceived += (o, e) => Debug.WriteLine(e.Data ?? "NULL", "ffplay");
       ffproc.Exited += (o, e) => Debug.WriteLine("Exited", "ffplay");
       ffproc.Start();

       worker = new BackgroundWorker();
       worker.DoWork += worker_DoWork;
       worker.WorkerReportsProgress = true;
       worker.ProgressChanged += worker_ProgressChanged;
       worker.RunWorkerAsync(ffproc);  // pass the process so e.Argument is not null in worker_DoWork
    }

    public void worker_DoWork(object sender, DoWorkEventArgs e)
    {
       try
       {
           var internalWorker = sender as BackgroundWorker;
           Process p = e.Argument as Process;
           buffer = new MemoryStream();
           bufferWriter = new BinaryWriter(buffer);
           using (var reader = new BinaryReader(p.StandardOutput.BaseStream))
           {
               while (true)
               {
                   bufferWriter.Write(1);
                   var img = (Bitmap)Image.FromStream(buffer);
                   pictureBox1.Image = img;
                   //get the jpg image
               }
            }
        }
        catch (Exception ex)
        {
              // Log the error, continue processing the live stream
        }
    }

    Any help would be appreciated!
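    For reference, the usual way to split a continuous MJPEG pipe into individual images is to scan the byte stream for the JPEG start-of-image and end-of-image markers (0xFFD8 / 0xFFD9); the same logic carries over to the C# BinaryReader loop above. A small, illustrative Python sketch of the technique, run here against an in-memory stream rather than ffmpeg's stdout:

    ```python
    import io

    # JPEG frame boundaries in an MJPEG byte stream
    SOI = b"\xff\xd8"  # start of image
    EOI = b"\xff\xd9"  # end of image

    def iter_jpeg_frames(stream, chunk_size=4096):
        """Yield complete JPEG frames from a continuous MJPEG stream.

        Works for any file-like object: ffmpeg's stdout pipe or, as
        here, an in-memory stream used for demonstration.
        """
        buf = b""
        while True:
            chunk = stream.read(chunk_size)
            if not chunk:
                break
            buf += chunk
            while True:
                start = buf.find(SOI)
                if start < 0:
                    buf = buf[-1:]  # keep last byte: it may be half of a split SOI
                    break
                end = buf.find(EOI, start + 2)
                if end < 0:
                    buf = buf[start:]  # incomplete frame; keep the tail
                    break
                yield buf[start:end + 2]
                buf = buf[end + 2:]

    # Demonstration with two fake "JPEG" frames in one stream
    fake = SOI + b"frame-one" + EOI + SOI + b"frame-two" + EOI
    frames = list(iter_jpeg_frames(io.BytesIO(fake)))
    ```

    In the C# version, the same scan would run over the bytes read from `p.StandardOutput.BaseStream`, handing each complete frame to `Image.FromStream` instead of feeding the growing buffer directly.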

  • Moviepy : subclip fails when iterating through CSV

    5 May 2016, by user3316291

    I am trying to randomize the order of video clips based on timings from a CSV file and then reassemble the randomized clips into a single video. However, I am getting an error in the loop that iterates over each clip's timings to create the subclips.

    Here is what my CSV looks like:

    00:00:32.18,00:00:52.10,1
    00:00:52.11,00:00:56.09,2
    00:00:56.10,00:00:58.15,3
    00:00:58.16,00:01:05.16,4
    00:01:05.17,00:01:16.04,5

    column 1 is clip onset
    column 2 is clip offset
    column 3 is scene number that I use to randomize

    Here is my code:

    import os
    import csv
    import numpy as np
    from moviepy.editor import *

    f = open('SceneCuts.csv')
    csv_f = csv.reader(f)

    scenes = []
    ons = []
    offs = []
    for row in csv_f:
       ons.append(row[0])
       offs.append(row[1])
       scenes.append(row[2])

    r_scene = scenes
    np.random.seed(1000)
    np.random.shuffle(r_scene)
    r_scene = map(int, r_scene)

    clip = VideoFileClip("FullVideo.m4v")

    temp = []
    for row in r_scene:
       print(row)
       temp.append(clip.subclip(ons[row-1], offs[row-1]))

    catclip = concatenate_videoclips(temp)
    catclip_resize = catclip.resize((1024,576))
    catclip_resize.write_videofile("RandomVideo.mp4")

    Here is the output; the error occurs at line 29 (temp.append):

    File "/Users/Dustin/anaconda/lib/python2.7/site-packages/moviepy/video/io/ffmpeg_reader.py", line 87, in initialize
    self.proc = sp.Popen(cmd, **popen_params)

    File "/Users/Dustin/anaconda/lib/python2.7/subprocess.py", line 710, in __init__
    errread, errwrite)

    File "/Users/Dustin/anaconda/lib/python2.7/subprocess.py", line 1316, in _execute_child
    data = _eintr_retry_call(os.read, errpipe_read, 1048576)

    File "/Users/Dustin/anaconda/lib/python2.7/subprocess.py", line 476, in _eintr_retry_call
    return func(*args)

    OSError: [Errno 22] Invalid argument

    Based on my research, it appears to be something regarding child processes and subprocess.Popen, but I can't figure it out. Thanks!

    EDIT to add new information:
    I have been running the above script in Spyder (Anaconda) and getting the errors above. However, when I run it from a terminal or Sublime (cmd+b), the code "works": it runs and I do not get the error, but the resulting video file is a mess. There are multiple conflicting audio tracks that shouldn't be there. I am not sure what is going on in Spyder, but I'd love to know. Also, I still need to fix the audio problem.
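    Separately from the Spyder error, note that `r_scene = scenes` only binds a second name to the same list object, so the shuffle reorders `scenes` as well (harmless here, since `scenes` is not reused, but a latent pitfall). A small sketch of the randomization step with an explicit copy, in plain Python with made-up timings and no moviepy needed:

    ```python
    import random

    # (onset, offset, scene) rows as they would be parsed from the CSV
    rows = [
        ("00:00:32.18", "00:00:52.10", 1),
        ("00:00:52.11", "00:00:56.09", 2),
        ("00:00:56.10", "00:00:58.15", 3),
    ]
    ons = [r[0] for r in rows]
    offs = [r[1] for r in rows]
    scenes = [r[2] for r in rows]

    r_scene = list(scenes)      # copy: shuffling must not reorder `scenes`
    random.seed(1000)
    random.shuffle(r_scene)

    # scene numbers are 1-based, so index with row - 1
    order = [(ons[s - 1], offs[s - 1]) for s in r_scene]
    ```

    Each (onset, offset) pair in `order` would then be passed to `clip.subclip(...)` exactly as in the original loop.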