Advanced search

Media (1)

Keyword: - Tags -/illustrator

Other articles (83)

  • Organizing by category

    17 May 2013

    In MediaSPIP, a section goes by two names: category and rubrique.
    The various documents stored in MediaSPIP can be filed under different categories. You can create a category by clicking "publier une catégorie" ("publish a category") in the publish menu at the top right (after logging in). A category can itself be filed under another category, so you can build a whole tree of categories.
    The next time you publish a document, the newly created category will be offered (...)

  • Authorizations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or later. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (6768)

  • react native app doesn't load my video which is made by ffmpeg

    10 March 2023, by yabbee

    I'm working on a React Native Expo project and using expo-av to play video. I'm testing on my iPhone and it works almost fine: I copied the sample code from the expo-av docs, and the Big Buck Bunny video loads and plays successfully. But there is one video that can't be played in my app. It's an mp4 stored on an S3 server, made with an ffmpeg command on my computer and uploaded to S3 manually. I can download it and play it on my machine, but when I try to load it in my Expo app, the video doesn't show up in the component at all. I've written the video source correctly, including the https:// prefix, but nothing appears. How can I solve this problem? I'm currently using expo 48.0.0, expo-av 13.2.1, and expo-dev-client 2.1.5.


    Here is the ffmpeg command I used to make the video. As you can see, I'm creating a retro look by overlaying a grain-effect mp4 that I downloaded.


    ffmpeg -i /Users/yosuke/Desktop/ffmpeg_playground/effects/grainAndFlash.mp4 -i {inputFilePath} \
      -filter_complex "[0:a][1:a]amerge[mixedAudio];[0]format=rgba,colorchannelmixer=aa=0.25[fg];[1][fg]overlay[out];[out]trim=0:32,setpts=PTS-STARTPTS[video]" \
      -map "[video]" -map "[mixedAudio]" \
      -pix_fmt yuv420p -c:v libx264 -crf 18 -shortest {outputFilePath}
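
    A note on playback (not part of the original command, but worth checking): by default ffmpeg writes the mp4 "moov" index at the end of the file, and some players will not start a network stream until they can read it, even though the same file plays fine once downloaded. If the problem is streaming from S3, remuxing before upload with ffmpeg -i in.mp4 -c copy -movflags +faststart out.mp4 moves the index to the front and may fix it.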




    Here is the Expo app code:


    import React, { useRef } from 'react';
    import { ScrollView, Dimensions } from 'react-native';
    import { Video, ResizeMode } from 'expo-av';

    const Container = () => {
      const vidRef = useRef(null);
      return (
        <ScrollView style={{ flex: 1 }}>
          {/* The <Video> markup was stripped when this post was scraped; this is
              the minimal expo-av sample it describes. The S3 URL is a placeholder. */}
          <Video
            ref={vidRef}
            source={{ uri: 'https://<your-s3-bucket>/video.mp4' }}
            useNativeControls
            resizeMode={ResizeMode.CONTAIN}
            style={{ width: Dimensions.get('window').width, height: 300 }}
          />
        </ScrollView>
      );
    };

    export default Container;
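
    If the video renders as an empty box, expo-av can usually report why. Below is a minimal diagnostic sketch: the onError and onPlaybackStatusUpdate props are standard expo-av, while the DebugVideo component name and VIDEO_URL constant are hypothetical stand-ins for the question's own component and S3 URL.

    import React from 'react';
    import { Video } from 'expo-av';

    // Placeholder for the S3 URL, which was not included in the post.
    const VIDEO_URL = 'https://<your-s3-bucket>/video.mp4';

    export const DebugVideo = () => (
      <Video
        source={{ uri: VIDEO_URL }}
        useNativeControls
        // Fires when the native player fails to load or decode the file.
        onError={(message) => console.warn('expo-av onError:', message)}
        // status.error is set when loading fails before playback starts.
        onPlaybackStatusUpdate={(status) => {
          if (!status.isLoaded && status.error) {
            console.warn('playback error:', status.error);
          }
        }}
        style={{ width: 320, height: 180 }}
      />
    );

    On iOS, a decode-level error here usually points at the file itself (codec, pixel format, or container layout) rather than the app code; see the faststart note under the ffmpeg command above.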


  • Convert a WebRTC track stream to a URL (RTSP/UDP/RTP/HTTP) in a video tag

    19 July 2020, by Zeeshan Younis

    I am new to WebRTC, and I have the client/server connection working: on the client I choose a webcam and post the stream to the server as a track, and on the server side I take that track and assign its stream to the video source. Everything is fine up to this point, but now I'm adding AI (artificial intelligence), and I want to convert my track's stream into a URL, maybe UDP/RTSP/RTP etc., so the AI can use that URL for object detection. I don't know how to convert a track stream to a URL. There are a couple of packages, like https://ffmpeg.org/ and RTP-to-WebRTC etc.; I am using Node.js, Socket.io and WebRTC. Below you can check my client- and server-side code for getting and posting the stream; I am following this GitHub code: https://github.com/Basscord/webrtc-video-broadcast.
    My main concern is to expose the track as a URL for a video tag. Is it possible or not? Please suggest; any help would be appreciated.


    Server.js


    This is the Node.js server code:

    // Socket.io signaling relay: it only forwards offers, answers, and ICE
    // candidates between the single broadcaster and any number of watchers.
    const express = require("express");
    const app = express();

    let broadcaster;
    const port = 4000;

    const http = require("http");
    const server = http.createServer(app);

    const io = require("socket.io")(server);
    app.use(express.static(__dirname + "/public"));

    io.sockets.on("error", e => console.log(e));
    io.sockets.on("connection", socket => {
      socket.on("broadcaster", () => {
        broadcaster = socket.id;
        socket.broadcast.emit("broadcaster");
      });
      socket.on("watcher", () => {
        socket.to(broadcaster).emit("watcher", socket.id);
      });
      socket.on("offer", (id, message) => {
        socket.to(id).emit("offer", socket.id, message);
      });
      socket.on("answer", (id, message) => {
        socket.to(id).emit("answer", socket.id, message);
      });
      socket.on("candidate", (id, message) => {
        socket.to(id).emit("candidate", socket.id, message);
      });
      socket.on("disconnect", () => {
        socket.to(broadcaster).emit("disconnectPeer", socket.id);
      });
    });
    server.listen(port, () => console.log(`Server is running on port ${port}`));


    Broadcast.js
    This is the code that emits the stream (track):

    const peerConnections = {};
    const config = {
      iceServers: [
        {
          urls: ["stun:stun.l.google.com:19302"]
        }
      ]
    };

    const socket = io.connect(window.location.origin);

    socket.on("answer", (id, description) => {
      peerConnections[id].setRemoteDescription(description);
    });

    socket.on("watcher", id => {
      const peerConnection = new RTCPeerConnection(config);
      peerConnections[id] = peerConnection;

      let stream = videoElement.srcObject;
      stream.getTracks().forEach(track => peerConnection.addTrack(track, stream));

      peerConnection.onicecandidate = event => {
        if (event.candidate) {
          socket.emit("candidate", id, event.candidate);
        }
      };

      peerConnection
        .createOffer()
        .then(sdp => peerConnection.setLocalDescription(sdp))
        .then(() => {
          socket.emit("offer", id, peerConnection.localDescription);
        });
    });

    socket.on("candidate", (id, candidate) => {
      peerConnections[id].addIceCandidate(new RTCIceCandidate(candidate));
    });

    socket.on("disconnectPeer", id => {
      peerConnections[id].close();
      delete peerConnections[id];
    });

    window.onunload = window.onbeforeunload = () => {
      socket.close();
    };

    // Get camera and microphone
    const videoElement = document.querySelector("video");
    const audioSelect = document.querySelector("select#audioSource");
    const videoSelect = document.querySelector("select#videoSource");

    audioSelect.onchange = getStream;
    videoSelect.onchange = getStream;

    getStream()
      .then(getDevices)
      .then(gotDevices);

    function getDevices() {
      return navigator.mediaDevices.enumerateDevices();
    }

    function gotDevices(deviceInfos) {
      window.deviceInfos = deviceInfos;
      for (const deviceInfo of deviceInfos) {
        const option = document.createElement("option");
        option.value = deviceInfo.deviceId;
        if (deviceInfo.kind === "audioinput") {
          option.text = deviceInfo.label || `Microphone ${audioSelect.length + 1}`;
          audioSelect.appendChild(option);
        } else if (deviceInfo.kind === "videoinput") {
          option.text = deviceInfo.label || `Camera ${videoSelect.length + 1}`;
          videoSelect.appendChild(option);
        }
      }
    }

    function getStream() {
      if (window.stream) {
        window.stream.getTracks().forEach(track => {
          track.stop();
        });
      }
      const audioSource = audioSelect.value;
      const videoSource = videoSelect.value;
      const constraints = {
        audio: { deviceId: audioSource ? { exact: audioSource } : undefined },
        video: { deviceId: videoSource ? { exact: videoSource } : undefined }
      };
      return navigator.mediaDevices
        .getUserMedia(constraints)
        .then(gotStream)
        .catch(handleError);
    }

    function gotStream(stream) {
      window.stream = stream;
      audioSelect.selectedIndex = [...audioSelect.options].findIndex(
        option => option.text === stream.getAudioTracks()[0].label
      );
      videoSelect.selectedIndex = [...videoSelect.options].findIndex(
        option => option.text === stream.getVideoTracks()[0].label
      );
      videoElement.srcObject = stream;
      socket.emit("broadcaster");
    }

    function handleError(error) {
      console.error("Error: ", error);
    }


    RemoteServer.js
    This code receives the track and assigns it to the video tag:

    let peerConnection;
    const config = {
      iceServers: [
        {
          urls: ["stun:stun.l.google.com:19302"]
        }
      ]
    };

    const socket = io.connect(window.location.origin);
    const video = document.querySelector("video");

    socket.on("offer", (id, description) => {
      peerConnection = new RTCPeerConnection(config);
      peerConnection
        .setRemoteDescription(description)
        .then(() => peerConnection.createAnswer())
        .then(sdp => peerConnection.setLocalDescription(sdp))
        .then(() => {
          socket.emit("answer", id, peerConnection.localDescription);
        });
      peerConnection.ontrack = event => {
        video.srcObject = event.streams[0];
      };
      peerConnection.onicecandidate = event => {
        if (event.candidate) {
          socket.emit("candidate", id, event.candidate);
        }
      };
    });

    socket.on("candidate", (id, candidate) => {
      peerConnection
        .addIceCandidate(new RTCIceCandidate(candidate))
        .catch(e => console.error(e));
    });

    socket.on("connect", () => {
      socket.emit("watcher");
    });

    socket.on("broadcaster", () => {
      socket.emit("watcher");
    });

    socket.on("disconnectPeer", () => {
      peerConnection.close();
    });

    window.onunload = window.onbeforeunload = () => {
      socket.close();
    };
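
    One way to turn this setup into a URL (a sketch, not part of the original post): on the watcher page, capture the received MediaStream with the browser's MediaRecorder API and ship the encoded chunks over the socket that is already open; on the Node side, pipe them into ffmpeg, which republishes the feed to an RTSP server such as MediaMTX. The "stream-chunk" event name and the rtsp://localhost:8554/ai URL are hypothetical; MediaRecorder, child_process.spawn, and ffmpeg's -f rtsp output are real.

    // --- Watcher page: record the incoming WebRTC stream, forward chunks ---
    const recorder = new MediaRecorder(video.srcObject, {
      mimeType: "video/webm;codecs=vp8,opus"
    });
    recorder.ondataavailable = async event => {
      if (event.data.size > 0) {
        socket.emit("stream-chunk", await event.data.arrayBuffer());
      }
    };
    recorder.start(250); // deliver a chunk every 250 ms

    // --- Node server: feed the chunks into ffmpeg, which pushes RTSP ---
    // Assumes an RTSP server (e.g. MediaMTX) is listening on port 8554.
    const { spawn } = require("child_process");
    const ffmpeg = spawn("ffmpeg", [
      "-i", "pipe:0",                 // webm chunks arrive on stdin
      "-c:v", "libx264", "-preset", "veryfast", "-tune", "zerolatency",
      "-c:a", "aac",
      "-f", "rtsp", "rtsp://localhost:8554/ai"
    ]);
    ffmpeg.stderr.on("data", d => console.log(d.toString()));
    io.sockets.on("connection", socket => {
      socket.on("stream-chunk", chunk => ffmpeg.stdin.write(Buffer.from(chunk)));
    });

    The AI process can then open rtsp://localhost:8554/ai like any other camera URL. The re-encode costs some latency; a server-side WebRTC stack (node-webrtc, or the Go-based RTP-to-WebRTC examples mentioned above) that hands raw RTP to ffmpeg directly avoids the MediaRecorder hop.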


  • Permissions issue with Python and ffmpeg on a Mac

    13 April 2020, by EventHorizon

    I am fairly new to Python (4 weeks), and I have been struggling with this all day.


    I am using macOS 10.13, Python 3.7 via Anaconda Navigator 1.9.12, and Spyder 4.0.1.


    Somehow (only a noob, remember) I had 2 Anaconda environments. I don't do production code, just research, so I figured I would make life simple and just use the base environment. I deleted the other environment.


    I had previously got FFmpeg working and was able to do frame grabs, build mpeg animations, and convert them to gifs for blog posts and such. I had FFmpeg installed in the directories associated with the deleted environment, so it went away.


    No worries, I got the git URL, used Terminal to install it in /opt/anaconda3/bin. It's all there and I can run FFmpeg from the Terminal.


    My problem: when I attempt to run a module that previously worked fine, I get the following message:


    [Errno 13] Permission denied: '/opt/anaconda3/bin/ffmpeg'


    In my module I set the location of FFmpeg: plt.rcParams['animation.ffmpeg_path'] = '/opt/anaconda3/bin/ffmpeg'


    In my module I have the following lines:


    writer = animation.FFMpegWriter(fps=frameRate, metadata=metadata)
    writer.setup(fig, "animation.mp4", 100)


    This calls matplotlib's 'animation.py', which runs the following:


    def setup(self, fig, outfile, dpi=None):
        '''
        Perform setup for writing the movie file.

        Parameters
        ----------
        fig : `~matplotlib.figure.Figure`
            The figure object that contains the information for frames
        outfile : str
            The filename of the resulting movie file
        dpi : int, optional
            The DPI (or resolution) for the file.  This controls the size
            in pixels of the resulting movie file. Default is fig.dpi.
        '''
        self.outfile = outfile
        self.fig = fig
        if dpi is None:
            dpi = self.fig.dpi
        self.dpi = dpi
        self._w, self._h = self._adjust_frame_size()

        # Run here so that grab_frame() can write the data to a pipe. This
        # eliminates the need for temp files.
        self._run()

    def _run(self):
        # Uses subprocess to call the program for assembling frames into a
        # movie file.  *args* returns the sequence of command line arguments
        # from a few configuration options.
        command = self._args()
        _log.info('MovieWriter.run: running command: %s', command)
        PIPE = subprocess.PIPE
        self._proc = subprocess.Popen(
            command, stdin=PIPE, stdout=PIPE, stderr=PIPE,
            creationflags=subprocess_creation_flags)


    Everything works fine up to the last line (i.e. 'command' looks like a well-formatted FFmpeg command line, PIPE returns -1) but subprocess.Popen() bombs out with the error message above.


    I have tried changing file permissions - taking a sledgehammer approach and setting everything in /opt/anaconda3/bin/ffmpeg to 777, read, write, and execute. But that doesn't seem to make any difference. I really am clueless when it comes to Apple's OS, file permissions, etc. Any suggestions ?
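
    One detail worth checking (an inference from the description above, not something the post confirms): installing FFmpeg "from the git URL" into /opt/anaconda3/bin typically leaves /opt/anaconda3/bin/ffmpeg as a cloned source directory rather than a compiled binary, and macOS raises exactly this [Errno 13] Permission denied when subprocess.Popen() tries to execute a directory. Running file /opt/anaconda3/bin/ffmpeg in Terminal will report either a directory or a Mach-O executable; if it is a directory, point plt.rcParams['animation.ffmpeg_path'] at the actual binary reported by which ffmpeg instead.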
