
Media (1)
-
Bee video in portrait
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (70)
-
MediaSPIP version 0.1 Beta
16 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in the standalone version.
To get a working installation, you must manually install all of the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to make other modifications (...) -
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
Improving the base version
13 September 2013
A nicer multiple selection
The Chosen plugin improves the ergonomics of multiple-select fields. See the two following images to compare.
To use it, simply activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure the plugin (Les squelettes > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-select lists (...)
On other sites (9950)
-
FFMPEG multi livestream - recorded stream sent to different services like YT and Twitch at different times (on different button clicks)
4 October 2022, by Ganesh
I have been trying for the last 10 days with no success. I am creating a Python application that accepts a URL, visits that URL using Chromium, captures the screen, and sends that real-time screen recording to one or more livestream acceptors such as YouTube Live, Twitch, Twitter or Facebook Live, and several of these may be active at once.


There are two challenges (both depend on user actions such as clicking different buttons):


- At the time of starting the livestream we know only one livestream acceptor; the other acceptors could be added via another API call at any time, or might never be added during the whole livestream.
- Any of the streams could be stopped at any moment, including the first one that started the original livestreaming service.

To solve these challenges I am trying the following process (I used an mp4 file as the source to simplify things):


- Create a stream and store it into PIPE.stdout:


import subprocess as sp  # assumed import; the post uses `sp` as the subprocess alias

# Step 1: run ffmpeg in real time (-re) and write the FLV stream to stdout (pipe:1)
ffmpeg_Command_get_stream = 'ffmpeg -re -i test.mp4 -f flv pipe:1'
ffmpeg_Command_get_stream = ffmpeg_Command_get_stream.split()
pipe = sp.Popen(ffmpeg_Command_get_stream,
                stdout=sp.PIPE,
                stderr=sp.PIPE,
                bufsize=8000000,
                shell=True,
                universal_newlines=True
                )
# communicate() waits for ffmpeg to exit and buffers all of its output in memory
out, err = pipe.communicate()
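(A minimal sketch, not the poster's code, of how this first step could avoid blocking: the command is passed as a list without shell=True, and the FLV stream is read from pipe.stdout in chunks instead of with communicate(), which waits for ffmpeg to exit.)

import subprocess as sp

# Launch ffmpeg and read its FLV output incrementally instead of waiting
# for the whole stream with communicate().
ffmpeg_cmd = ['ffmpeg', '-re', '-i', 'test.mp4', '-f', 'flv', 'pipe:1']
pipe = sp.Popen(ffmpeg_cmd, stdout=sp.PIPE, stderr=sp.DEVNULL)

while True:
    chunk = pipe.stdout.read(65536)  # read 64 KiB of the FLV stream at a time
    if not chunk:
        break
    # forward `chunk` to whatever consumers are currently active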



- And send that stream, with the help of FFmpeg, to the livestream acceptor when the YouTube LiveStream button is clicked:


ffmpeg_Command_send_stream = ['ffmpeg','-i',pipe.stdout,'-f','flv',RTMPURL_YOUTUBE]
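Note that pipe.stdout is a file object, so it cannot be dropped into the argument list as if it were a filename. A minimal sketch of one way to connect the two processes (an assumption, not the poster's code): let the second ffmpeg read from its standard input via pipe:0 and hand it the first process's stdout:

# Hypothetical sketch: the second ffmpeg reads the FLV stream from stdin ('pipe:0')
# and forwards it unchanged (-c copy) to the YouTube RTMP ingest URL.
push_youtube = sp.Popen(
    ['ffmpeg', '-i', 'pipe:0', '-c', 'copy', '-f', 'flv', RTMPURL_YOUTUBE],
    stdin=pipe.stdout,
)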






Update: trying to explain it a little more:


Step 1 - I need a real-time stream from the first command, so I used -re in FFmpeg.


Step 2 - Use the stream above as input for another command and send its output as a livestream to YouTube (or Twitch/Facebook). This second step happens only when the user clicks the "YT LiveStream" button. The tricky part is that there are multiple buttons (YT LiveStream, Twitch LiveStream, Facebook LiveStream) and the user can click any of them at any time, or all of them one by one; a sketch of one possible arrangement follows below.
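A minimal sketch of one way this on-demand fan-out is sometimes handled (every name below is an assumption, not part of the post): the capture is published once to a local relay, for example an nginx-rtmp endpoint at rtmp://localhost/live/src, and each button click starts or stops an independent ffmpeg process that copies that relay stream to one destination:

import subprocess as sp

SOURCE_URL = 'rtmp://localhost/live/src'   # local relay fed by the capture process (assumption)
destinations = {}                          # service name -> running ffmpeg Popen object

def start_push(name, rtmp_url):
    # Start forwarding the shared source to one service, e.g. on a button click.
    if name not in destinations:
        destinations[name] = sp.Popen(
            ['ffmpeg', '-i', SOURCE_URL, '-c', 'copy', '-f', 'flv', rtmp_url])

def stop_push(name):
    # Stop a single service without touching the others.
    proc = destinations.pop(name, None)
    if proc:
        proc.terminate()

# Example: start_push('youtube', RTMPURL_YOUTUBE) when "YT LiveStream" is clicked,
# stop_push('youtube') when it is stopped; Twitch and Facebook work the same way.

With one process per destination, any service can be started or stopped at any time without disturbing the others, which matches the two challenges listed above.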




Sorry for the bad explanation.


What am I doing wrong? Is this possible, or do I need to go with another approach?


Any help would be greatly appreciated.


-
libavfilter/dnn : determine dnn output during execute_model instead of set_input_output
25 April 2019, by Guo, Yejun
libavfilter/dnn : determine dnn output during execute_model instead of set_input_output
Currently, within the interface set_input_output, the dims/memory of the TensorFlow dnn model output is determined by executing the model with zero input. Actually, the output dims might vary with different input data for networks such as the object detection models faster-rcnn, ssd and yolo.
This patch moves the logic from set_input_output to execute_model, which is suitable for all the cases. Since the interface changed, dnn_backend_native also changes.
In vf_sr.c, it knows whether it's srcnn or espcn by executing the model with zero input, so execute_model has to be called in config_props.
Signed-off-by : Guo, Yejun <yejun.guo@intel.com>
Signed-off-by : Pedro Arthur <bygrandao@gmail.com> -
An ffmpeg command can work in cmd but not in Python using subprocess.call() or os.system()
6 June 2018, by Starrysky
I want to convert a .mp3 to .wav. This is my command:
ffmpeg -i a.mp3 -ar 16000 -ac 1 -acodec pcm_s16le a.wav
It worked well in cmd
C:\Users\starrysky\Documents\GitHub\bing_pic\html>ffmpeg -i a.mp3 -ar 16000 -ac 1 -acodec pcm_s16le a.wav
ffmpeg version N-86482-gbc40674 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 7.1.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
libavutil 55. 66.100 / 55. 66.100
libavcodec 57. 99.100 / 57. 99.100
libavformat 57. 73.100 / 57. 73.100
libavdevice 57. 7.100 / 57. 7.100
libavfilter 6. 92.100 / 6. 92.100
libswscale 4. 7.101 / 4. 7.101
libswresample 2. 8.100 / 2. 8.100
libpostproc 54. 6.100 / 54. 6.100
Input #0, mp3, from 'a.mp3':
Metadata:
encoder : Lavf54.6.100
Duration: 00:00:01.87, start: 0.000000, bitrate: 8 kb/s
Stream #0:0: Audio: mp3, 8000 Hz, mono, s16p, 8 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (mp3 (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, wav, to 'a.wav':
Metadata:
ISFT : Lavf57.73.100
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 16000 Hz, mono, s16, 256 kb/s
Metadata:
encoder : Lavc57.99.100 pcm_s16le
size= 59kB time=00:00:01.87 bitrate= 256.3kbits/s speed= 187x
video:0kB audio:58kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.130208%
But when I moved it into my Python program, something strange happened:
>>> C:\Users\starrysky\Documents\GitHub\bing_pic\html\
'ffmpeg' is not recognized as an internal or external command, operable program or batch file.
1 Command 'ffmpeg -i a.mp3 -ar 16000 -ac 1 -acodec pcm_s16le a.wav' returned non-zero exit status 1.
File error, dear
[WinError 2] The system cannot find the file specified.: 'a.wav'
This is part of my Python code:
# Imports used by this snippet (assumed; the post says this is only part of the code)
import os
import json
import wave
import base64
import subprocess
import urllib.parse
import urllib.request


@bot.register(wife, RECORDING)
def translate_sound(msg):
    msg.get_file(save_path='a.mp3')
    path = os.path.abspath('.') + '\\'
    print(path)
    try:
        # Convert the downloaded mp3 to a 16 kHz mono wav for the speech API
        subprocess.check_call('ffmpeg -i a.mp3 -ar 16000 -ac 1 -acodec pcm_s16le a.wav', shell=True)
        # ''
    except Exception as e:
        print(1, e)
    wav_to_text('a.wav')
    try:
        os.remove('a.wav')
    except Exception as e:
        print(e)


# Call the Baidu speech recognition API
def get_token():
    URL = 'http://openapi.baidu.com/oauth/2.0/token'
    _params = urllib.parse.urlencode({'grant_type': b'client_credentials',
                                      'client_id': b'',      # credentials left blank in the post
                                      'client_secret': b''})
    _res = urllib.request.Request(URL, _params.encode())
    _response = urllib.request.urlopen(_res)
    _data = _response.read()
    _data = json.loads(_data)
    return _data['access_token']


def wav_to_text(wav_file):
    try:
        wav_file = open(wav_file, 'rb')
    except IOError:
        print('File error, dear')
        return
    wav_file = wave.open(wav_file)
    n_frames = wav_file.getnframes()
    print('n_frames ', n_frames)
    frame_rate = wav_file.getframerate()
    print("frame_rate ", frame_rate)
    if n_frames == 1 or frame_rate not in (8000, 16000):
        print('Format does not match')
        return
    audio = wav_file.readframes(n_frames)
    seconds = n_frames / frame_rate + 1
    minute = int(seconds / 60 + 1)
    for i in range(0, minute):
        # Send the audio to the API in one-minute chunks
        sub_audio = audio[i*60*frame_rate:(i+1)*60*frame_rate]
        base_data = base64.b64encode(sub_audio)
        data = {"format": "wav",
                "token": get_token(),
                "len": len(sub_audio),
                "rate": frame_rate,
                "speech": base_data.decode(),
                "cuid": "B8-AC-6F-2D-7A-94",
                "channel": 1}
        data = json.dumps(data)
        res = urllib.request.Request('http://vop.baidu.com/server_api',
                                     data.encode(),
                                     {'content-type': 'application/json'})
        response = urllib.request.urlopen(res)
        res_data = json.loads(response.read())
        try:
            print(res_data['result'][0])
        except Exception as e:
            print(e)

What happened?
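The output above shows two separate symptoms: cmd reports that 'ffmpeg' is not recognized, and the later attempt to open 'a.wav' fails with WinError 2 because the conversion never produced the file. A minimal sketch, assuming the root cause is that the Python process does not see ffmpeg on its PATH (the absolute path below is a made-up example, not taken from the post):

import shutil
import subprocess

# Check whether the Python process can find ffmpeg on its PATH.
ffmpeg_path = shutil.which('ffmpeg')
print('ffmpeg found at:', ffmpeg_path)

if ffmpeg_path is None:
    # Hypothetical fallback: point directly at the ffmpeg executable.
    ffmpeg_path = r'C:\ffmpeg\bin\ffmpeg.exe'  # adjust to the real install location

subprocess.check_call([ffmpeg_path, '-i', 'a.mp3', '-ar', '16000',
                       '-ac', '1', '-acodec', 'pcm_s16le', 'a.wav'])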