
Media (2)
-
SPIP - plugins - embed code - Example
2 September 2013
Updated: September 2013
Language: French
Type: Image
-
Publishing an image simply
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (61)
-
Customize by adding your logo, banner or background image
5 September 2013
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013
Present the changes in your MediaSPIP, or the news of your projects, using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form: for a document of the news type, the fields offered by default are: publication date (customize the publication date) (...)
-
(De)Activating features (plugins)
18 February 2011
To manage adding and removing extra features (plugins), MediaSPIP has used SVP since version 0.2.
SVP makes it easy to activate plugins from the MediaSPIP configuration area.
To get there, simply go to the configuration area and open the "Gestion des plugins" (plugin management) page.
MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work perfectly with each (...)
On other sites (9363)
-
"The system cannot find the file specified" error when trying to execute an FFmpeg command with C# (same code works fine in a different app)
5 March 2023, by m_kr
I know there are similar questions to this one. I have gone through every single one I could find and nothing worked for me. Here is my issue:


I am trying to execute an FFmpeg command on the command line through .NET.


Before anything else, I tried doing it with the following code:


public static string executeCommand(string commandToBeExecuted)
{
    Process cmd = new Process();
    cmd.StartInfo.FileName = "cmd.exe";
    cmd.StartInfo.RedirectStandardInput = true;
    cmd.StartInfo.RedirectStandardOutput = true;
    cmd.StartInfo.CreateNoWindow = true;
    cmd.StartInfo.UseShellExecute = false;
    cmd.Start();

    cmd.StandardInput.WriteLine(commandToBeExecuted);
    cmd.StandardInput.Flush();
    cmd.StandardInput.Close();
    cmd.WaitForExit();
    return cmd.StandardOutput.ReadToEnd();
}



I sent "ffmpeg -h" as commandToBeExecuted. This did not work.
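

In other words, the call I was making is essentially this (a sketch of my call site; the surrounding method is omitted):


string output = executeCommand("ffmpeg -h");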


I next tried the following solution:


public static string ffmpegCommand(string commandToBeExecuted)
{
    ProcessStartInfo startInfo = new ProcessStartInfo();
    startInfo.CreateNoWindow = false;
    startInfo.UseShellExecute = false;
    startInfo.FileName = "c:\\ffmpeg\\bin\\ffmpeg.exe";
    startInfo.WindowStyle = ProcessWindowStyle.Hidden;
    startInfo.Arguments = "-h";

    startInfo.RedirectStandardOutput = true;
    startInfo.RedirectStandardError = true;

    Process exeProcess = Process.Start(startInfo);

    // string error = exeProcess.StandardError.ReadToEnd();
    string output = exeProcess.StandardOutput.ReadToEnd();
    exeProcess.WaitForExit();
    return output;
}



This returns the following error:




The system cannot find the file specified




I am assuming this is referring to this part of the code:


startInfo.FileName = "c:\\ffmpeg\\bin\\ffmpeg.exe";



However, I checked and this is the correct path to my ffmpeg.exe file. Even more oddly, this code works correctly when tested in a new .NET console application. However, I am creating an extension for OutSystems integration, and when testing this code there it no longer works. The long exception from the logs is the following:




CssbobffmpegCommandTestFolder
System.ComponentModel.Win32Exception : The system cannot find the file specified
at Object.s [as getException] (https://personal-jwy0bfog.outsystemscloud.com/FFMpegCommandGeneratorFFProbeVisual/scripts/OutSystems.js?RnlDcii3Xz75iIHHERIZtA:2:10241)
at c.onSuccess (https://personal-jwy0bfog.outsystemscloud.com/FFMpegCommandGeneratorFFProbeVisual/scripts/OutSystems.js?RnlDcii3Xz75iIHHERIZtA:3:7232)
at XMLHttpRequest. (https://personal-jwy0bfog.outsystemscloud.com/FFMpegCommandGeneratorFFProbeVisual/scripts/OutSystems.js?RnlDcii3Xz75iIHHERIZtA:3:2648)
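


Since the log shows a Win32Exception ("The system cannot find the file specified"), one small diagnostic I could add inside the extension (just a sketch, assuming System.IO is referenced; this is not part of my real code) is to log where the code is actually running and whether the exe is visible from there:


string ffmpegPath = "c:\\ffmpeg\\bin\\ffmpeg.exe";
string diag = string.Format(
    "ffmpeg.exe exists: {0}; BaseDirectory: {1}; CurrentDirectory: {2}",
    File.Exists(ffmpegPath),   // false would mean the host running the extension cannot see this path
    AppDomain.CurrentDomain.BaseDirectory,
    Environment.CurrentDirectory);
// return or log 'diag' instead of starting the process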




I researched similar problems and tried the following solutions:


In place of:


startInfo.FileName = "c:\\ffmpeg\\bin\\ffmpeg.exe";



I tried:


startInfo.WorkingDirectory = "c:\\ffmpeg\\bin";
 startInfo.FileName = @"ffmpeg.exe";



I also tried changing this:


startInfo.Arguments = "-h";



to:


startInfo.Arguments = "/C -h";



I tried to "add new item" to my solution : the ffmpeg.exe file, and I tried the following logic :


public static string testingNewApproachTwoThree(string commandToBeExecuted)
{
    string res;
    ProcessStartInfo startInfo = new ProcessStartInfo();

    startInfo.CreateNoWindow = false;
    startInfo.UseShellExecute = false;
    startInfo.FileName = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "ffmpeg\\ffmpeg.exe");
    startInfo.Arguments = "-h";
    startInfo.RedirectStandardOutput = true;
    //startInfo.RedirectStandardError = true;

    res = string.Format(
        "Executing \"{0}\" with arguments \"{1}\".\r\n",
        startInfo.FileName,
        startInfo.Arguments) + " NEXT: ";

    try
    {
        using (Process process = Process.Start(startInfo))
        {
            while (!process.StandardOutput.EndOfStream)
            {
                res = res + process.StandardOutput.ReadLine();
            }

            process.WaitForExit();
        }
    }
    catch (Exception ex)
    {
        res = res + "exception:" + ex.Message;
    }

    return res;
}



as suggested in a different question.


I tried changing the capitalization of letters in the specified file path to make sure it matched the naming of my folders. Nothing worked.


Any ideas?


-
How to interpret the ffmpeg recording options available for a webcam (DirectShow)?
5 January 2023, by Jones659
I am trying to create a GUI for personal use that allows someone to customise the recording and converting options of ffmpeg without directly using the command line. At the moment, I am learning about the different parameters and flags in ffmpeg.


Apologies in advance if I end up asking some stupid questions; I am on a learning journey at the moment, and unfortunately not all of this info is available online in an easily understandable way.


I have a USB webcam which reported having the following options available to it:


[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=640x480 fps=5 max s=640x480 fps=30
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=640x480 fps=5 max s=640x480 fps=30 (tv, bt470bg/bt709/unknown, topleft) chroma_location=topleft
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=352x288 fps=5 max s=352x288 fps=30
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=352x288 fps=5 max s=352x288 fps=30 (tv, bt470bg/bt709/unknown, topleft)
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=320x240 fps=5 max s=320x240 fps=30
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=320x240 fps=5 max s=320x240 fps=30 (tv, bt470bg/bt709/unknown, topleft)
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=176x144 fps=5 max s=176x144 fps=30
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=176x144 fps=5 max s=176x144 fps=30 (tv, bt470bg/bt709/unknown, topleft)
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=160x120 fps=5 max s=160x120 fps=30
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=160x120 fps=5 max s=160x120 fps=30 (tv, bt470bg/bt709/unknown, topleft)
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=1280x1024 fps=5 max s=1280x1024 fps=9
[dshow @ 00000000003f9340] pixel_format=yuyv422 min s=1280x1024 fps=5 max s=1280x1024 fps=9 (tv, bt470bg/bt709/unknown, topleft)



I just want to get to the bottom of how I should interpret this; apologies for asking multiple questions:


-
The fact that both resolution and fps have a min and max value (for every option) seems to imply that these two parameters are supposedly uncontrollably variable, right? In practice, the fps has varied depending on brightness, but the resolution has not. Is it safe to assume that video imaging devices (especially webcams) do not have variable resolution?


-
Secondly, why is every option listed twice, with half of them specifying extra info such as color_range, color_space, and chroma_location? Is this just a quirk? Surely those extra parameter options should not be discarded?


-
It's hard to know how to make sense of this, but or example : the fact that only "tv" is ever shown, does that impliy that the webcam can only ever do limited color range, and there is no point trying to get full 0,255 out of it ? I read somewhere that "pc" implies full range of 0-255, whereas "tv" implies a range of 16-235


-
With regards to color space, is it acceptable to record the webcam as raw (un-encoded) and then convert to a different color space later down the line? Which approach to dealing with the color space yields the least amount of lost color? My only previous experience with color spaces is in the realm of images, where, for example, it makes no sense to convert sRGB to ROMM16 RGB: you're going to a color space with wider coverage, extra colors won't be created out of thin air, and so you'd want to go once from raw to a color space and avoid converting between color spaces afterwards. Also, what does "unknown" mean in the color space options?


Here's the culmination of some research/testing i've done, is there anything correct, or seriously wrong, in the conclusions and assumptions I've made below ?


My understanding of pixel_format is as follows: when you're recording (even to raw), you specify the pixel format using something like "-pixel_format yuyv422"; this is a "packed", not "planar", format, and it is what the webcam produces. When you convert from raw to something like mkv using libx264, you can't specify a "packed" pixel format such as "yuyv422", but must instead use an appropriate planar counterpart, such as "yuv422p", which would be specified using "-pix_fmt yuv422p".
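

As a concrete example of what I mean, the two commands I have in mind look roughly like this (the device name is just a placeholder for my webcam, and the size/framerate are one of the modes listed above):


ffmpeg -f dshow -video_size 640x480 -framerate 30 -pixel_format yuyv422 -i video="My USB Webcam" -c:v rawvideo capture_raw.avi
ffmpeg -i capture_raw.avi -c:v libx264 -pix_fmt yuv422p capture_h264.mkv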


I did a raw recording of the webcam (in which I recorded a bright light, in the dark); I didn't set any of the options in the brackets above. I then converted this video using libx264 with the flags "-dst_range 1 -color_range 2", which I saw elsewhere on the internet.


Taking a screenshot of this video using VLC and putting it through ImageMagick's identify -verbose shows that the color range of the screenshot is 0-255. As for the video itself, MediaInfo reports "color range: Full", and VLC's codec info says "Decoded format: Planar 4:2:2 YUV full scale". Is this info worth anything, or is it just metadata that the video got tagged with?


At first I was happy about ImageMagick's color range report, but I am now thinking that the 0-255 range could be a result of "overshoot" values produced by the camera, which aren't actually supposed to be mapped linearly.


I appreciate that this probably feels like some school-kiddy offloading their homework assignment to avoid doing work, but I hope it can be seen that I've looked into these things prior to putting this post together.


Thanks in advance, if anyone takes the time to answer anything.


-
Stuck installing a voice cloner via Python (module not found)
25 November 2023, by Wimmah
I use Python 3.11.5.


As a complete Python n00b I'm turning to this forum because I'm stuck installing a voice cloner (for personal use, to do a funny trick for X-mas with my family). This is the tool I'm trying to install: https://github.com/CorentinJ/Real-Time-Voice-Cloning


With a little help from ChatGPT I got quite far, but for some reason the downloaded datasets can't be found. The instructions for the tool state:


Install instructions from GitHub
So my tree looks like this:


(base) willem@willems-air Voice cloner % tree
.
├── demo_cli.py
├── demo_toolbox.py
├── encoder_preprocess.py
├── encoder_train.py
├── saved_models
│   └── default
│       ├── encoder.pt
│       ├── synthesizer.pt
│       └── vocoder.pt
├── synthesizer_preprocess_audio.py
├── synthesizer_preprocess_embeds.py
├── synthesizer_train.py
└── vocoder_train.py

3 directories, 11 files



However, when I give the command to execute the demo, I get the message that a needed module can't be found:


(base) willem@willems-air Voice cloner % python demo_cli.py
Traceback (most recent call last):
 File "/Users/willem/Desktop/Voice cloner/demo_cli.py", line 10, in <module>
 from encoder import inference as encoder
ModuleNotFoundError: No module named 'encoder'


I built a tree that (to me) looks in line with the installation instructions... (And of course I downloaded the modules without any errors.)
Here are also the first lines of demo_cli.py, where you can also see the path:


import argparse
import os
from pathlib import Path

import librosa
import numpy as np
import soundfile as sf
import torch

from encoder import inference as encoder
from encoder.params_model import model_embedding_size as speaker_embedding_size
from synthesizer.inference import Synthesizer
from utils.argutils import print_args
from utils.default_models import ensure_default_models
from vocoder import inference as vocoder


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        formatter_class=argparse.ArgumentDefaultsHelpFormatter
    )
    parser.add_argument("-e", "--enc_model_fpath", type=Path,
                        default="saved_models/default/encoder.pt",
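

For what it's worth, a quick sanity check I could run from the "Voice cloner" folder (just a sketch; it assumes that the encoder, synthesizer, utils and vocoder folders from the GitHub repo are meant to sit next to demo_cli.py as importable packages):


import importlib.util
from pathlib import Path

# demo_cli.py imports from these packages; in the repo each is a folder
# next to demo_cli.py containing an __init__.py.
for name in ("encoder", "synthesizer", "utils", "vocoder"):
    folder = Path(name)
    print(name,
          "| folder exists:", folder.is_dir(),
          "| __init__.py:", (folder / "__init__.py").is_file(),
          "| importable:", importlib.util.find_spec(name) is not None)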



I think I missed a quite basic step here, but by now ChatGPT is just looping and can't help any more, so I need a human tip I guess ;)


Thx in advance!