
Other articles (73)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps taken that resulted in the problem; a link to the site / page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

On other sites (9137)

  • Combine multiple videos into one

    10 January 2012, by StackedCrooked

    I have three videos:

    • a lecture that was filmed with a video camera
    • a video of the desktop capture of the computer used in the lecture
    • and the video of the whiteboard

    I want to create a final video with those three components taking up a certain region of the screen.

    Is there open-source software that would allow me to do this (mencoder, ffmpeg, virtualdub, ...)? Which do you recommend?

    Or is there a C/C++ API that would enable me to create something like that programmatically ?

    Edit
    There will be multiple recorded lectures in the future. This means that I need a generic/automated solution.

    I'm currently looking into whether I could write an application with GStreamer to do this job. Any comments on that?

    Solved!
    I succeeded in doing this with GStreamer's videomixer element. I use the gst-launch syntax to create a pipeline and then load it with gst_parse_launch. It's a really productive way to implement complex pipelines.

    Here's a pipeline that takes two incoming video streams and a logo image, blends them into one stream, and then duplicates it so that it is simultaneously displayed and saved to disk.

     desktop. ! queue
              ! ffmpegcolorspace
              ! videoscale
              ! video/x-raw-yuv,width=640,height=480
              ! videobox right=-320
              ! ffmpegcolorspace
              ! vmix.sink_0
     webcam. ! queue
             ! ffmpegcolorspace
             ! videoscale
             ! video/x-raw-yuv,width=320,height=240
             ! vmix.sink_1
     logo. ! queue
           ! jpegdec
           ! ffmpegcolorspace
           ! videoscale
           ! video/x-raw-yuv,width=320,height=240
           ! vmix.sink_2
     vmix. ! t.
     t. ! queue
        ! ffmpegcolorspace
        ! ffenc_mpeg2video
        ! filesink location="recording.mpg"
     t. ! queue
        ! ffmpegcolorspace
        ! dshowvideosink
     videotestsrc name="desktop"
     videotestsrc name="webcam"
     multifilesrc name="logo" location="logo.jpg"
     videomixer name=vmix
                sink_0::xpos=0 sink_0::ypos=0 sink_0::zorder=0
                sink_1::xpos=640 sink_1::ypos=0 sink_1::zorder=1
                sink_2::xpos=640 sink_2::ypos=240 sink_2::zorder=2
     tee name="t"
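
    Purely as an illustration of the gst_parse_launch approach (and not the asker's original program), here is a minimal Python sketch using the GStreamer 1.x PyGObject bindings. Note that the pipeline above relies on 0.10-era elements such as ffmpegcolorspace and videomixer, so the sketch substitutes current equivalents (videoconvert, compositor); treat the element names and the simple two-source test pipeline as assumptions.

     import gi
     gi.require_version('Gst', '1.0')
     from gi.repository import Gst

     # Initialize GStreamer before parsing any pipeline description.
     Gst.init(None)

     # Illustrative two-input mix in gst-launch syntax (GStreamer 1.x elements,
     # not the 0.10-era ones used in the pipeline above).
     description = """
         videotestsrc pattern=ball num-buffers=300 !
             video/x-raw,width=320,height=240 ! mix.sink_0
         videotestsrc pattern=snow num-buffers=300 !
             video/x-raw,width=320,height=240 ! mix.sink_1
         compositor name=mix sink_1::xpos=320 ! videoconvert ! autovideosink
     """

     # Gst.parse_launch turns the textual description into a runnable pipeline.
     pipeline = Gst.parse_launch(description)
     pipeline.set_state(Gst.State.PLAYING)

     # Wait for end-of-stream (after the 300 test buffers) or an error.
     bus = pipeline.get_bus()
     bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                            Gst.MessageType.EOS | Gst.MessageType.ERROR)
     pipeline.set_state(Gst.State.NULL)

    The same call accepts the full mixing/tee description above once the element names match the installed GStreamer version.
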
  • I need to create a streaming app where videos are stored in AWS S3

    27 July 2023, by abuzar zaidi

    I need to create a video streaming app where my videos will be stored in S3 and the video will be streamed from the backend. Right now I am trying to use ffmpeg in the backend, but it does not work properly.

    What am I doing wrong in this code? And if ffmpeg does not support streaming video stored in AWS S3, please suggest other options.

    Backend

    const express = require('express');
    const aws = require('aws-sdk');
    const ffmpeg = require('fluent-ffmpeg');
    const cors = require('cors');
    const app = express();

    // Set up AWS credentials
    aws.config.update({
      accessKeyId: '#######################',
      secretAccessKey: '###############',
      region: '###############3',
    });

    const s3 = new aws.S3();
    app.use(cors());

    app.get('/stream', (req, res) => {
      const bucketName = '#######';
      const key = '##############'; // Replace with the key/path of the video file in the S3 bucket
      const params = { Bucket: bucketName, Key: key };
      const videoStream = s3.getObject(params).createReadStream();

      // Transcode to HLS format
      const hlsStream = ffmpeg(videoStream)
        .format('hls')
        .outputOptions([
          '-hls_time 10',
          '-hls_list_size 0',
          '-hls_segment_filename segments/segment%d.ts',
        ])
        .pipe(res);

      // Transcode to DASH format and pipe the output to the response
      ffmpeg(videoStream)
        .format('dash')
        .outputOptions([
          '-init_seg_name init-stream$RepresentationID$.mp4',
          '-media_seg_name chunk-stream$RepresentationID$-$Number%05d$.mp4',
        ])
        .output(res)
        .run();
    });

    const port = 5000;
    app.listen(port, () => {
      console.log(`Server running on http://localhost:${port}`);
    });

    Frontend

    import React from 'react';

    const App = () => {
      const videoUrl = 'http://localhost:3000/api/playlist/runPlaylist/6c3e7af45a3b8a5caf2fef17a42ef9a0'; // Replace with your backend URL
      const videoUrl = 'http://localhost:5000/stream';
      return (
        <div>
          <h1>Video Streaming Example</h1>
          <video controls="controls">
            <source src="{videoUrl}" type="video/mp4"></source>
            Your browser does not support the video tag.
          </video>
        </div>
      );
    };

    export default App;

    Please list down solutions or options I can use here.


  • Visualizing Call Graphs Using Gephi

    1 September 2014, by Multimedia Mike — General

    When I was at university studying computer science, I took a basic chemistry course. During an accompanying lab, the teaching assistant chatted me up and asked about my major. He then said, “Computer science? Well, that’s just typing stuff, right?”

    My impulsive retort: “Sure, and chemistry is just about mixing together liquids and coming up with different colored liquids, as seen on the cover of my high school chemistry textbook, right?”


    Chemistry fun

    In fact, pure computer science has precious little to do with typing (as is joked in CS circles, computer science is about computers in the same way that astronomy is about telescopes). However, people who study computer science often pursue careers as programmers, or to put it in fancier professional language, software engineers.

    So, what’s a software engineer’s job? Isn’t it just typing? That’s where I’ve been going with this overly long setup. After thinking about it for long enough, I like to say that a software engineer’s trade is managing complexity.

    A few years ago, I discovered Gephi, an open source tool for graph and data visualization. It looked neat but I didn’t have much use for it at the time. Recently, however, I was trying to get a better handle on a large codebase. I.e., I was trying to manage the project’s complexity. And then I thought of Gephi again.

    Prior Work
    One way to get a grip on a large C codebase is to instrument it for profiling and extract details from the profiler. On Linux systems, this means compiling and linking the code using the -pg flag. After running the executable, there will be a gmon.out file which is post-processed using the gprof command.

    GNU software development tools have a reputation for being rather powerful and flexible, but also extremely raw. This first hit home when I was learning how to use the GNU tool for code coverage — gcov — and the way it outputs very raw data that you need to massage with other tools in order to get really useful intelligence.

    And so it is with gprof output. The output gives you a list of functions sorted by the amount of processing time spent in each. Then it gives you a flattened call tree. This is arranged as “during the profiled executions, function c was called by functions a and b and called functions d, e, and f; function d was called by function c and called functions g and h”.

    How can this call tree data be represented in a more instructive manner that is easier to navigate? My first impulse (and I don’t think I’m alone in this) is to convert the gprof call tree into a representation suitable for interpretation by Graphviz. Unfortunately, doing so tends to generate some enormous and unwieldy static images.

    Feeding gprof Data To Gephi
    I learned of Gephi a few years ago and recalled it when I developed an interest in gaining better perspective on a large base of alien C code. To understand what this codebase is doing for a particular use case, instrument it with gprof, gather execution data, and then study the code paths.

    How could I feed the gprof data into Gephi? Gephi supports numerous graphing formats including an XML-based format named GEXF.

    Thus, the challenge becomes converting gprof output to GEXF.

    Which I did.
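
    The actual converter is the tool linked at the end of this post; purely to illustrate the idea (this sketch is not that script), a few lines of Python along these lines can emit GEXF once caller/callee/call-count triples have been pulled out of the gprof call graph. The pre-digested edge-list input and the use of networkx are assumptions of the sketch.

    # Illustrative only: turn pre-extracted call-graph edges into a GEXF file.
    import sys

    import networkx as nx

    graph = nx.DiGraph()

    # Assumed (hypothetical) input: one "caller callee call_count" triple per
    # line, already extracted from the gprof call-graph section.
    with open(sys.argv[1]) as edge_file:
        for line in edge_file:
            caller, callee, count = line.split()
            # Store the call count as the edge weight; Gephi can map it to
            # edge thickness, as in the screenshots below.
            graph.add_edge(caller, callee, weight=int(count))

    # GEXF is the XML-based graph format that Gephi opens natively.
    nx.write_gexf(graph, sys.argv[2])

    Run as, say, "python edges-to-gexf.py vp9edges.txt vp9decode.gexf" (file names hypothetical), the output opens in Gephi just like the file in the next section.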

    Demonstration
    I have been absent from FFmpeg development for a long time, which is a pity because a lot of interesting development has occurred over the last 2-3 years after a troubling period of stagnation. I know that 2 big video codec developments have been HEVC (next in the line of MPEG codecs) and VP9 (heir to VP8’s throne). FFmpeg implements them both now.

    I decided I wanted to study the code flow of VP9. So I got the latest FFmpeg code from git and built it using the options "--extra-cflags=-pg --extra-ldflags=-pg". Annoyingly, I also needed to specify "--disable-asm" because gcc complains of some register allocation snafus when compiling inline ASM in profiling mode (and this is on x86_64). No matter; ASM isn’t necessary for understanding overall code flow.

    After compiling, the binary ‘ffmpeg_g’ will have symbols and be instrumented for profiling. I grabbed a sample from this VP9 test vector set and went to work.

    ./ffmpeg_g -i vp90-2-00-quantizer-00.webm -f null /dev/null
    gprof ./ffmpeg_g > vp9decode.txt
    convert-gprof-to-gexf.py vp9decode.txt > /bigdisk/vp9decode.gexf
    

    Gephi loads vp9decode.gexf with no problem. Using Gephi, however, can be a bit challenging if one is not versed in any data exploration jargon. I recommend this Gephi getting started guide in slide deck form. Here’s what the default graph looks like:


    gprof-ffmpeg-gephi-1

    Not very pretty or helpful. BTW, that beefy arrow running from mid-top to lower-right is the call from decode_coeffs_b -> iwht_iwht_4x4_add_c. There were 18774 calls from the former to the latter in this execution. Right now, the edge thicknesses correlate to the number of calls between the nodes, which I’m not sure is the best representation.

    Following the tutorial slide deck, I at least learned how to enable the node labels (function symbols in this case) and apply a layout algorithm. The tutorial shows the force atlas layout. Here’s what the node neighborhood looks like for probing file type:


    gprof-ffmpeg-gephi-2

    Okay, so that’s not especially surprising – avprobe_input_format3 calls all of the *_probe functions in order to automatically determine input type. Let’s find that decode_coeffs_b function and see what its neighborhood looks like:


    gprof-ffmpeg-gephi-3

    That’s not very useful. Perhaps another algorithm might help. I select the Fruchterman–Reingold algorithm instead and get a slightly more coherent representation of the decoding node neighborhood:


    gprof-ffmpeg-gephi-4

    Further Work
    Obviously, I’m just getting started with this data exploration topic. One thing I would really appreciate in such a tool is the ability to interactively travel the graph, since that’s what I’m really hoping to get out of this experiment – watching the code flows.

    Perhaps someone else can find better use cases for visualizing call graph data. Thus, I have published the source code for this tool on GitHub.