
Other articles (112)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administer" section of the site.
    From there, the navigation menu gives access to a "Language management" section where support for new languages can be enabled.
    Each newly added language can still be deactivated as long as no object has been created in that language; once one has, it becomes greyed out in the configuration and (...)

  • Possible deployments

    31 January 2010

    Two types of deployment are possible, depending on two factors: the installation method chosen (standalone or as a farm), and the expected number of daily encodings and amount of traffic.
    Encoding video is a heavy process that consumes a great deal of system resources (CPU and RAM), and this must be taken into account. The system is therefore only viable on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011

    The simplest way to add information to authors is to install the Inscription3 plugin. It can also modify certain user-related behaviours (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins "champs extras 2" and "Interface pour champs extras".

On other sites (5412)

  • Are there any alternatives to SharedArrayBuffer, or methods for video editing in a web browser?

    26 July 2023, by Govinda Regmi

    I'm working on a web-based video editing application using ffmpeg that relies heavily on SharedArrayBuffer. Unfortunately, I've hit a roadblock with the "Cross-Origin-Embedder-Policy: require-corp | credentialless" and "Cross-Origin-Opener-Policy: same-origin" headers. While these headers allow the use of SharedArrayBuffer, they restrict other essential features, such as rendering images from an S3 bucket and loading the TinyMCE editor script.

    I am trying to achieve a video editor like this:

    I am using "next": "12.1.6", and I tried to implement ffmpeg like this:

    import { useEffect, useState } from "react";

    import { useDebounce } from "use-debounce";
    import { createFFmpeg, fetchFile } from "@ffmpeg/ffmpeg";

    import styles from "../videoEditor.module.scss";
    import RangeInput from "../range-input/RangeInput";
    import * as helpers from "../../../../utils/videoHelpers";

    const FF = createFFmpeg({
        log: true,
        corePath: "https://unpkg.com/@ffmpeg/core@0.10.0/dist/ffmpeg-core.js",
    });

    (async function () {
        await FF.load();
    })();

    export const VideoTrimmer = ({
        videoFile,
        trimmedVideoFile,
        isConfirmClicked,
        setTrimmedVideoFile,
        onConfirmClickHandler,
    }) => {
        const [URL, setURL] = useState([]);
        const [thumbNails, setThumbNails] = useState([]);
        const [videoMeta, setVideoMeta] = useState(null);
        const [inputVideoFile, setInputVideoFile] = useState(null);
        const [thumbnailIsProcessing, setThumbnailIsProcessing] = useState(false);

        const [rStart, setRstart] = useState(0);
        const [debouncedRstart] = useDebounce(rStart, 500);

        const [rEnd, setRend] = useState(10);
        const [debouncedRend] = useDebounce(rEnd, 500);

        const handleLoadedData = async (e) => {
            const el = e.target;
            const meta = {
                name: inputVideoFile.name,
                duration: el.duration,
                videoWidth: 50,
                videoHeight: 50,
            };
            setVideoMeta(meta);
            const thumbNails = await getThumbnails(meta);
            setThumbNails(thumbNails);
        };

        const getThumbnails = async ({ duration }) => {
            if (!FF.isLoaded()) await FF.load();
            setThumbnailIsProcessing(true);
            let MAX_NUMBER_OF_IMAGES = 15;
            let NUMBER_OF_IMAGES = duration < MAX_NUMBER_OF_IMAGES ? duration : 15;
            let offset =
                duration === MAX_NUMBER_OF_IMAGES ? 1 : duration / NUMBER_OF_IMAGES;

            const arrayOfImageURIs = [];
            FF.FS("writeFile", inputVideoFile.name, await fetchFile(inputVideoFile));

            for (let i = 0; i < NUMBER_OF_IMAGES; i++) {
                let startTimeInSecs = helpers.toTimeString(Math.round(i * offset));

                try {
                    await FF.run(
                        "-ss",
                        startTimeInSecs,
                        "-i",
                        inputVideoFile.name,
                        "-t",
                        "00:00:1.000",
                        "-vf",
                        `scale=150:-1`,
                        `img${i}.png`,
                    );
                    const data = FF.FS("readFile", `img${i}.png`);

                    let blob = new Blob([data.buffer], { type: "image/png" });
                    let dataURI = await helpers.readFileAsBase64(blob);
                    FF.FS("unlink", `img${i}.png`);
                    arrayOfImageURIs.push(dataURI);
                } catch (error) {
                    // console.log({ message: error });
                }
            }
            setThumbnailIsProcessing(false);

            return arrayOfImageURIs;
        };
        const handleTrim = async () => {
            // setTrimIsProcessing(true);
            let startTime = ((rStart / 100) * videoMeta.duration).toFixed(2);
            let offset = ((rEnd / 100) * videoMeta.duration - startTime).toFixed(2);
            try {
                FF.FS("writeFile", inputVideoFile.name, await fetchFile(inputVideoFile));
                await FF.run(
                    "-ss",
                    helpers.toTimeString(startTime),
                    "-i",
                    inputVideoFile.name,
                    "-t",
                    helpers.toTimeString(offset),
                    "-c",
                    "copy",
                    "ping.mp4",
                );
                const data = FF.FS("readFile", "ping.mp4");
                const dataURL = await helpers.readFileAsBase64(
                    new Blob([data.buffer], { type: "video/mp4" }),
                );

                setTrimmedVideoFile(dataURL);
            } catch (error) {
                // console.log(error);
            } finally {
                // setTrimIsProcessing(false);
            }
        };

        const handleRangeChange = (type, event) => {
            const limit = parseInt((120 / videoMeta.duration) * 100);
            if (type === "start") {
                if (rEnd - rStart > limit) {
                    setRend(parseInt(event.target.value) + limit);
                    setRstart(parseInt(event.target.value));
                } else {
                    setRstart(parseInt(event.target.value));
                }
            } else if (type === "end") {
                if (rEnd - rStart > limit) {
                    setRstart(parseInt(event.target.value) - limit);
                    setRend(parseInt(event.target.value));
                } else {
                    setRend(parseInt(event.target.value));
                }
            }
        };

        useEffect(() => {
            if (videoMeta?.duration > 120) {
                const limit = parseInt((120 / videoMeta.duration) * 100);
                setRend(limit);
            }
        }, [videoMeta?.duration]);

        useEffect(() => {
            const videoFormData = new FormData();
            if (videoFile) {
                videoFormData.append("file", videoFile);
                const handleChange = async () => {
                    setInputVideoFile(videoFile);
                    setURL(await helpers.readFileAsBase64(videoFile));
                };
                handleChange();
            }
        }, []);

        useEffect(() => {
            if (videoMeta) {
                onConfirmClickHandler(handleTrim);
            }
        }, [isConfirmClicked]);

        useEffect(() => {
            if (debouncedRend == rEnd && debouncedRstart == rStart && videoMeta) {
                handleTrim();
            }
        }, [debouncedRend, debouncedRstart, videoMeta]);

        return (
            <>
                <article className="grid_txt_2">
                    {trimmedVideoFile ? (
                    ) : (
                    )}
                </article>
            </>
        );
    };


    next.config.js


    const nextConfig = {
        async headers() {
            return [
                {
                    source: "/(.*)",
                    headers: [
                        { key: "Cross-Origin-Opener-Policy", value: "same-origin" },
                        { key: "Cross-Origin-Embedder-Policy", value: "credentialless" },
                    ],
                },
            ];
        },
    };


    This works seamlessly in Chrome and Edge, but it encounters issues ("SharedArrayBuffer is not defined") in Firefox and Safari. How can we ensure it works across all major browsers?
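A common mitigation is to detect whether the page is cross-origin isolated and fall back to a single-threaded ffmpeg.wasm core when SharedArrayBuffer is unavailable. The sketch below assumes the `@ffmpeg/core-st` package and the pinned versions; check them against the ffmpeg.wasm documentation:

```javascript
// Sketch: choose an ffmpeg.wasm core depending on cross-origin isolation.
// Package names and versions are assumptions to verify against the docs.
function pickFfmpegCorePath(crossOriginIsolated) {
    // The multi-threaded core needs SharedArrayBuffer, which browsers only
    // expose on cross-origin isolated pages (COOP + COEP headers set).
    return crossOriginIsolated
        ? "https://unpkg.com/@ffmpeg/core@0.10.0/dist/ffmpeg-core.js"
        : "https://unpkg.com/@ffmpeg/core-st@0.10.0/dist/ffmpeg-core.js";
}

// In the browser:
// const corePath = pickFfmpegCorePath(self.crossOriginIsolated === true);
// const FF = createFFmpeg({ log: true, corePath });
```

The single-threaded core is slower, but it runs without the COOP/COEP headers, so Firefox and Safari keep working.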


    When using key: "Cross-Origin-Embedder-Policy", value: "require-corp", I encounter an error while fetching images/scripts from cross-origin sources: "net::ERR_BLOCKED_BY_RESPONSE.NotSameOriginAfterDefaultedToSameOriginByCoep 200 (OK)". Can you suggest how I can resolve this issue?
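One workaround under require-corp is to serve the blocked assets from the app's own origin, for example with a Next.js rewrite that proxies the bucket. The bucket hostname below is a placeholder; alternatively, the asset server can respond with `Cross-Origin-Resource-Policy: cross-origin`:

```javascript
// Sketch: proxy cross-origin assets through the app's own origin so that
// COEP: require-corp no longer blocks them. The bucket URL is a placeholder.
const nextConfig = {
    async rewrites() {
        return [
            {
                source: "/s3/:path*",
                destination: "https://my-bucket.s3.amazonaws.com/:path*",
            },
        ];
    },
};

module.exports = nextConfig;
```

Images are then referenced as `/s3/...` and arrive same-origin, which satisfies the embedder policy without loosening it.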


  • Recording voice using HTML5 and processing it with ffmpeg

    22 March 2015, by user3789242

    I need to use ffmpeg in my JavaScript/HTML5 project, which lets the user select the format he wants the audio to open with. I don't know anything about ffmpeg, and despite a lot of research I don't know how to use it in my project. I found an example, https://github.com/sopel39/audioconverter.js, but the problem is how to add ffmpeg.js, which is 8 MB, to my project. If someone can help me I'll be very thankful.
    Here is my full code:

    the javascript page :

    // variables
    var leftchannel = [];
    var rightchannel = [];
    var recorder = null;
    var recording = false;
    var recordingLength = 0;
    var volume = null;
    var audioInput = null;
    var sampleRate = 44100;
    var audioContext = null;
    var context = null;
    var outputString;



    if (!navigator.getUserMedia)
    navigator.getUserMedia = navigator.getUserMedia ||
                            navigator.webkitGetUserMedia ||
                            navigator.mozGetUserMedia ||
                            navigator.msGetUserMedia;

    if (navigator.getUserMedia){
    navigator.getUserMedia({audio:true}, success, function(e) {
    alert('Error capturing audio.');
    });
    } else alert('getUserMedia not supported in this browser.');



    function getVal(value)
     {

    // if R is pressed, we start recording
    if ( value == "record"){
       recording = true;
       // reset the buffers for the new recording
       leftchannel.length = rightchannel.length = 0;
       recordingLength = 0;
       document.getElementById('output').innerHTML="Recording now...";

    // if S is pressed, we stop the recording and package the WAV file
    } else if ( value == "stop" ){

       // we stop recording
       recording = false;
       document.getElementById('output').innerHTML="Building wav file...";

       // we flat the left and right channels down
       var leftBuffer = mergeBuffers ( leftchannel, recordingLength );
       var rightBuffer = mergeBuffers ( rightchannel, recordingLength );
       // we interleave both channels together
       var interleaved = interleave ( leftBuffer, rightBuffer );



       var buffer = new ArrayBuffer(44 + interleaved.length * 2);
       var view = new DataView(buffer);

       // RIFF chunk descriptor
       writeUTFBytes(view, 0, 'RIFF');
       view.setUint32(4, 44 + interleaved.length * 2, true);
       writeUTFBytes(view, 8, 'WAVE');
       // FMT sub-chunk
       writeUTFBytes(view, 12, 'fmt ');
       view.setUint32(16, 16, true);
       view.setUint16(20, 1, true);
       // stereo (2 channels)
       view.setUint16(22, 2, true);
       view.setUint32(24, sampleRate, true);
       view.setUint32(28, sampleRate * 4, true);
       view.setUint16(32, 4, true);
       view.setUint16(34, 16, true);
       // data sub-chunk
       writeUTFBytes(view, 36, 'data');
       view.setUint32(40, interleaved.length * 2, true);


       var lng = interleaved.length;
       var index = 44;
       var volume = 1;
       for (var i = 0; i < lng; i++){
           view.setInt16(index, interleaved[i] * (0x7FFF * volume), true);
           index += 2;
       }

       var blob = new Blob ( [ view ], { type : 'audio/wav' } );

       // let's save it locally

       document.getElementById('output').innerHTML='Handing off the file now...';
       var url = (window.URL || window.webkitURL).createObjectURL(blob);

       var li = document.createElement('li');
       var au = document.createElement('audio');
       var hf = document.createElement('a');

       au.controls = true;
       au.src = url;
       hf.href = url;
       hf.download = 'audio_recording_' + new Date().getTime() + '.wav';
       hf.innerHTML = hf.download;
       li.appendChild(au);
       li.appendChild(hf);
       recordingList.appendChild(li);

    }
    }


    function success(e){

    audioContext = window.AudioContext || window.webkitAudioContext;
    context = new audioContext();


    volume = context.createGain();

    // creates an audio node from the microphone incoming stream(source)
    source = context.createMediaStreamSource(e);

    // connect the stream(source) to the gain node
    source.connect(volume);

    var bufferSize = 2048;

    recorder = context.createScriptProcessor(bufferSize, 2, 2);

    //node for the visualizer
    analyser = context.createAnalyser();
    analyser.smoothingTimeConstant = 0.3;
    analyser.fftSize = 512;

    splitter = context.createChannelSplitter();
    //when recording happens
    recorder.onaudioprocess = function(e){

       if (!recording) return;
       var left = e.inputBuffer.getChannelData (0);
       var right = e.inputBuffer.getChannelData (1);

       leftchannel.push (new Float32Array (left));
       rightchannel.push (new Float32Array (right));
       recordingLength += bufferSize;

       // get the average for the first channel
       var array =  new Uint8Array(analyser.frequencyBinCount);
       analyser.getByteFrequencyData(array);

       var c=document.getElementById("myCanvas");
       var ctx = c.getContext("2d");
       // clear the current state
       ctx.clearRect(0, 0, 1000, 325);
       var gradient = ctx.createLinearGradient(0,0,0,300);
       gradient.addColorStop(1,'#000000');
       gradient.addColorStop(0.75,'#ff0000');
       gradient.addColorStop(0.25,'#ffff00');
       gradient.addColorStop(0,'#ffffff');
       // set the fill style
       ctx.fillStyle=gradient;
       drawSpectrum(array);
       function drawSpectrum(array) {
           for ( var i = 0; i < (array.length); i++ ){
                   var value = array[i];
                   ctx.fillRect(i*5,325-value,3,325);
               }

       }
    }

    function getAverageVolume(array) {
       var values = 0;
       var average;

       var length = array.length;

       // get all the frequency amplitudes
       for (var i = 0; i < length; i++) {
           values += array[i];
       }

       average = values / length;
       return average;
    }

       // we connect the recorder(node to destination(speakers))
       volume.connect(splitter);
       splitter.connect(analyser, 0, 0);

       analyser.connect(recorder);
       recorder.connect(context.destination);

    }




    function mergeBuffers(channelBuffer, recordingLength){
    var result = new Float32Array(recordingLength);
    var offset = 0;
    var lng = channelBuffer.length;
    for (var i = 0; i < lng; i++){
    var buffer = channelBuffer[i];
    result.set(buffer, offset);
    offset += buffer.length;
    }
       return result;
      }

    function interleave(leftChannel, rightChannel){
    var length = leftChannel.length + rightChannel.length;
    var result = new Float32Array(length);

    var inputIndex = 0;

    for (var index = 0; index < length; ){
    result[index++] = leftChannel[inputIndex];
    result[index++] = rightChannel[inputIndex];
    inputIndex++;
    }
    return result;
    }


    function writeUTFBytes(view, offset, string){
    var lng = string.length;
    for (var i = 0; i < lng; i++){

    view.setUint8(offset + i, string.charCodeAt(i));
    }
    }

    and here is the HTML code:



    • Hacking the Popcorn Hour C-200

      3 May 2010, by Mans — Hardware, MIPS

      Update: A new firmware version has been released since the publication of this article. I do not know whether the procedure described below works with the new version.

      The Popcorn Hour C-200 is a Linux-based media player with impressive specifications. At its heart is a Sigma Designs SMP8643 system on chip with a 667MHz MIPS 74Kf as main CPU, several co-processors, and 512MB of DRAM attached. Gigabit Ethernet, SATA, and USB provide connectivity with the world around it. With a modest $299 on the price tag, the temptation to repurpose the unit as a low-power server or cheap development board is hard to resist. This article shows how such a conversion can be achieved.

      Kernel

      The PCH runs a patched Linux 2.6.22.19 kernel. A source tarball is available from the manufacturer. This contains the sources with Sigma support patches, Con Kolivas’ patch set (scheduler tweaks), and assorted unrelated changes. Properly split patches are unfortunately not available. I have created a reduced patch against vanilla 2.6.22.19 with only Sigma-specific changes, available here.

      The installed kernel has a number of features disabled, notably PTY support and oprofile. We will use kexec to load a more friendly one.

      As might be expected, the PCH kernel does not have kexec support enabled. It does, however, by virtue of using closed-source components, support module loading. This lets us turn kexec into a module and load it. A patch for this is available here. To build the module, apply the patch to the PCH sources and build using this configuration. This will produce two modules, kexec.ko and mips_kexec.ko. No other products of this build will be needed.

      The replacement kernel can be built from the PCH sources or, if one prefers, from vanilla 2.6.22.19 with the Sigma-only patch. For the latter case, this config provides a minimal starting point suitable for NFS-root.

      When configuring the kernel, make sure CONFIG_TANGOX_IGNORE_CMDLINE is enabled. Otherwise the command line will be overridden by a useless one stored in flash. A good command line can be set with CONFIG_CMDLINE (under “Kernel hacking” in menuconfig) or passed from kexec.
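As an illustration, a command line for an NFS-root setup might look like the following; the server address and export path are placeholders, not values from the article:

```
CONFIG_CMDLINE="console=ttyS0,115200 root=/dev/nfs nfsroot=192.168.1.10:/export/pch-rootfs ip=dhcp"
```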

      Taking control

      In order to load our kexec module, we must first gain root privileges on the PCH, and here a few features of the system work to our advantage:

      1. The PCH allows mounting any NFS export to access media files stored there.
      2. There is an HTTP server running. As root.
      3. This HTTP server can be readily instructed to fetch files from an NFS mount.
      4. Files with a name ending in .cgi are executed. As root.

      All we need do to profit from this is place the kexec modules, the kexec userspace tools, and a simple script on an NFS export. Once this is done, and the mount point configured on the PCH, a simple HTTP request will send the old kernel screaming to /dev/null, our shiny new kernel taking its place.
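The article does not list kexec.cgi itself; a rough sketch of what such a script could look like follows, with every path an assumption (it must be adjusted to the actual NFS mount point on the PCH):

```sh
#!/bin/sh
# Hypothetical kexec.cgi: executed as root by the PCH HTTP server.
# All paths below are assumptions, not taken from the article.
MNT=/mnt/nfs-rootfs
insmod "$MNT/kexec.ko"
insmod "$MNT/mips_kexec.ko"
"$MNT/sbin/kexec" -l "$MNT/vmlinux"   # stage the replacement kernel
"$MNT/sbin/kexec" -e                  # jump into it, discarding the old one
```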

      The rootfs

      A kernel is mostly useless without a root filesystem containing tools and applications. A number of tools for cross-compiling a full system exist, each with its strengths and weaknesses. The only thing to look out for is the version of kernel headers used (usually a linux-headers package). As we will be running an old kernel, chances are the default version is too recent. Other than this, everything should be by the book.

      Assembling the parts

      Having gathered all the pieces, it is now time to assemble the hack. The following steps are suitable for an NFS-root system. Adaptation to a disk-based system is left as an exercise.

      1. Build a rootfs for MIPS 74Kf little endian. Make sure kernel headers used are no more recent than 2.6.22.x. Include a recent version of the kexec userspace tools.
      2. Fetch and unpack the PCH kernel sources.
      3. Apply the modular kexec patch.
      4. Using this config, build the modules and install them as usual to the rootfs. The version string must be 2.6.22.19-19-4.
      5. From either the same kernel sources or plain 2.6.22.19 with Sigma patches, build a vmlinux and (optionally) modules using this config. Modify the compiled-in command line to point to the correct rootfs. Set the version string to something other than in the previous step.
      6. Copy vmlinux to any directory in the rootfs.
      7. Copy kexec.sh and kexec.cgi to the same directory as vmlinux.
      8. Export the rootfs over NFS with full read/write permissions for the PCH.
      9. Power on the PCH, and update to latest firmware.
      10. Configure an NFS mount of the rootfs.
      11. Navigate to the rootfs in the PCH UI. A directory listing of bin, dev, etc. should be displayed.
      12. On the host system, run the kexec.sh script with the target hostname or IP address as argument.
      13. If all goes well, the new kernel will boot and mount the rootfs.

      Serial console

      A serial console is indispensable for solving boot problems. The PCH board has two UART connectors. We will use the one labeled UART0. The pinout is as follows (not standard PC pinout).

              +-----------+
             2| * * * * * |10
             1| * * * * * |9
              +-----------+
                J7 UART0
          /---------------------/ board edge
      
      Pin Function
      1 +5V
      5 Rx
      6 Tx
      10 GND

      The signals are 3.3V so a converter, e.g. MAX202, is required for connecting this to a PC serial port. The default port settings are 115200 bps 8n1.