1. Deploy the StorageClass

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-redis
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer  # value cut off in the source; this is the usual pairing with no-provisioner
```
TensorFlow introduces a new data type, the tensor, a multi-dimensional array similar to NumPy's ndarray. The differences from NumPy: an ndarray only supports CPU computation, while a tensor can also run on a GPU for acceleration; in addition, a tensor supports automatic differentiation, which makes it better suited to deep learning.
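The autodiff point above can be made concrete with a short sketch (assumes TensorFlow 2.x is installed; the values are illustrative):

```python
import tensorflow as tf

# A tensor variable; unlike a NumPy ndarray, TensorFlow can place it on a
# GPU and record the operations applied to it for automatic differentiation.
x = tf.Variable(3.0)

with tf.GradientTape() as tape:
    y = x * x  # y = x^2

# dy/dx = 2x, evaluated at x = 3.0
grad = tape.gradient(y, x)
print(float(grad))  # 6.0
```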
CMakeLists.txt (the source sets `CMAKE_CXX_STANDARD` twice, to 17 and then 14; only one setting takes effect, so the redundant line is dropped here):

```cmake
cmake_minimum_required(VERSION 3.25)
project(test)
set(CMAKE_CXX_STANDARD 17)
set(FFMPEG_DIR /usr/local/ffmp
```
Configure the linker library path: `sudo vim /etc/ld.so.conf.d/ffmpeg.conf` and add the line `/usr/local/ffmpeg/lib/`, then run `sudo ldconfig` to reload the linker cache. Write the CMakeLists.txt:

```cmake
cmake_minimum_required(VERSION 3.25)
project(test)
set(CMAKE_CX
```
```python
# -*- coding: utf-8 -*-
import os
import threading
import time
import tkinter
from tkinter import TOP, LEFT, RIGHT, messagebox, filedialog, DISABLED,
```
```python
# -*- coding: utf-8 -*-
import os
import threading
import tkinter
from tkinter import LEFT, RIGHT, filedialog, messagebox, DISABLED, NORMAL, TOP
impor
```
```python
# -*- coding: utf-8 -*-
import os
import time
import cv2
import numpy
from PIL import Image, ImageDraw, ImageFont

ascii_char = list("$@B%8&WM#*oahkbdp
```
```python
class NoModifyMeta(type):
    def __setattr__(cls, key, value):
        raise AttributeError(f"Cannot modify class attribute '{key}'")

class ConstDict(metaclass=NoModifyMeta):
```
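A self-contained sketch of the pattern above — a metaclass whose `__setattr__` rejects class-attribute assignment (the attribute `PI` is an illustrative example, not from the source):

```python
class NoModifyMeta(type):
    """Metaclass that blocks assignment to class attributes."""
    def __setattr__(cls, key, value):
        raise AttributeError(f"Cannot modify class attribute '{key}'")

class ConstDict(metaclass=NoModifyMeta):
    PI = 3.14159

print(ConstDict.PI)   # reading is unaffected
try:
    ConstDict.PI = 3  # assignment goes through the metaclass and fails
except AttributeError as e:
    print(e)
```

Note that this only protects class attributes: instances of `ConstDict` can still be mutated unless `__setattr__` is also overridden on the class itself.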
```python
import cv2
import subprocess

input_video_path = "/home/navy/Desktop/1.mp4"
opencv_video_path = "/home/navy/Desktop/2.mp4"
new_video_path = "/home/navy
```
```python
# -*- coding: utf-8 -*-
import os
import subprocess
import threading
import time
import tkinter
from tkinter import TOP, LEFT, RIGHT, messagebox, file
```
```shell
# Take the audio from 1.mp4 and the (optional) video from 2.mp4,
# copying the video stream and stopping at the shorter input
ffmpeg -i 1.mp4 -i 2.mp4 -map 0:a -map 1:v? -c:v copy -shortest output.mp4
```

```shell
# Mux a.mp4's video with a.mp3's audio, re-encoding the audio to AAC
ffmpeg -i a.mp4 -i a.mp3 -c:v copy -c:a aac -strict experimental output.mp4
```
```shell
# Extract the audio track from a video
ffmpeg -i 1.mp4 -vn output.mp3
# Extract the picture, filtering out the audio
ffmpeg -i 1.mp4 -an output.mp4
# Separate the video and audio streams in one pass
ffmpeg -i 1.mp4 -vn -c:v copy audio.mp3 -an -c
```
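The extraction commands above can also be driven from Python via subprocess — a sketch, assuming the `ffmpeg` binary is on `PATH`; the helper names and file names are illustrative:

```python
import subprocess

def extract_audio_cmd(src, dst):
    """Build the ffmpeg command that strips video (-vn) and keeps audio."""
    return ["ffmpeg", "-y", "-i", src, "-vn", dst]

def extract_video_cmd(src, dst):
    """Build the ffmpeg command that strips audio (-an) and keeps video."""
    return ["ffmpeg", "-y", "-i", src, "-an", dst]

cmd = extract_audio_cmd("1.mp4", "output.mp3")
print(" ".join(cmd))
# To actually run it:
# subprocess.run(cmd, check=True)
```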
1. Install OpenResty's build dependencies:
yum install -y pcre-devel openssl-devel gcc --skip-broken
2. Add the OpenResty repository:
yum-config-manager --add-repo https://openresty.org/pack
```cpp
string formatJson(string json) {
    string result = "";
    int level = 0;
    for (string::size_type index = 0; index < json.size(); index++) {
        char c = json[index];
```
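For comparison, the pretty-printing that the C++ snippet above implements by hand is a single call in Python's standard library (a sketch, not the author's code; the sample JSON is illustrative):

```python
import json

raw = '{"name":"navy","tags":["a","b"]}'
# json.loads parses the string; json.dumps re-serializes it with indentation
pretty = json.dumps(json.loads(raw), indent=4, ensure_ascii=False)
print(pretty)
```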
```python
import re

def to_camel_case(x):
    """Convert snake_case to camelCase."""
    return re.sub('_([a-zA-Z])', lambda m: m.group(1).upper(), x)

def to_upper_camel_case(x):
    """Convert snake_case to UpperCamelCase."""
    # The source cuts off here; this body is completed from the docstring
    s = to_camel_case(x)
    return s[:1].upper() + s[1:]
```
1. Install: pip3 install -U lesscode_tool
2. Create a project (currently only lesscode-py projects are supported; use a subcommand for other project types)
2.1 Create a lesscode-py project: lesscodeTool new -d test
2.2 Create a django project: lesscodeToo
1. Download the ffmpeg source from the official site: https://ffmpeg.org
2. Install ffmpeg's base build dependencies:
sudo apt-get update
sudo apt-get install build-essential yasm texi2html libvorbis-dev libmp3lame-dev libopus-dev libx264-dev libx265-dev libvpx-dev
```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <thread>

using namespace cv;
using namespace std;

void getCameraInfo(int cameraId) {
    VideoC
```
```javascript
var officegen = require('officegen');
var fs = require('fs');
var path = require('path');

var docx = officegen('docx');
docx.on('finalize', function (written) {
    console.log('Finish to create Word
```
```python
def data2single_dict(source):
    stack = [(source, "")]
    result = {}
    while stack:
        obj, parent_name = stack.pop()
        if isinstance(obj, dict):
            for k, v in obj
```
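The snippet above is cut off; here is a self-contained sketch of the same iterative, stack-based flattening idea. The underscore key separator and list handling are my assumptions, not taken from the source:

```python
def flatten(source, sep="_"):
    """Iteratively flatten nested dicts/lists into a single-level dict."""
    stack = [(source, "")]
    result = {}
    while stack:
        obj, parent = stack.pop()
        items = obj.items() if isinstance(obj, dict) else enumerate(obj)
        for k, v in items:
            key = f"{parent}{sep}{k}" if parent else str(k)
            if isinstance(v, (dict, list)):
                stack.append((v, key))   # descend later
            else:
                result[key] = v          # leaf value: record it
    return result

print(flatten({"a": {"b": 1, "c": [2, 3]}}))
# {'a_b': 1, 'a_c_0': 2, 'a_c_1': 3}
```

Using an explicit stack instead of recursion avoids Python's recursion limit on deeply nested input, which is presumably why the original is written this way.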
```shell
# Without password authentication
docker run --name pypi --restart always -p 8080:8080 -d pypiserver/pypiserver -P . -a .
# With password authentication
docker run --name pypi --restart always -v
```
```python
def es_mapping2dict(mapping):
    mapping_dict = dict()
    if isinstance(mapping, dict):
        if "properties" in mapping:
            for k, v in mapping.get("properties").it
```
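The mapping walker above is truncated; a hedged sketch of the idea — recursively descend `properties` and record each field's type under a dotted path. The function name mirrors the snippet, but the body and the dotted-path convention are my completion:

```python
def es_mapping2dict(mapping, prefix=""):
    """Flatten an Elasticsearch mapping into {dotted_field: type}."""
    result = {}
    if isinstance(mapping, dict):
        props = mapping.get("properties")
        if props:
            for k, v in props.items():
                path = f"{prefix}.{k}" if prefix else k
                if isinstance(v, dict) and "properties" in v:
                    result.update(es_mapping2dict(v, path))  # nested object
                elif isinstance(v, dict) and "type" in v:
                    result[path] = v["type"]                 # leaf field
    return result

mapping = {"properties": {"user": {"properties": {"name": {"type": "text"}}},
                          "age": {"type": "integer"}}}
print(es_mapping2dict(mapping))
# {'user.name': 'text', 'age': 'integer'}
```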
```python
def convert_query(query):
    """ Convert Elasticsearch query to use keyw
```
1. reshape: change the shape

```python
a = tf.random.normal([4, 28, 28, 3])
print("a:", a.shape, a.ndim)
# Merging the row and column axes loses the image's row/column structure;
# each image can be read as a flat list of pixels
b = tf.reshape(a, [4, 28*28, 3])
print("b:", b.shape, b.ndim)
```
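Since tensors mirror NumPy's ndarray (as noted earlier in these notes), the same reshape semantics can be checked without TensorFlow:

```python
import numpy as np

a = np.random.normal(size=(4, 28, 28, 3))
print(a.shape, a.ndim)   # (4, 28, 28, 3) 4

# Merging the row and column axes keeps every element but drops
# the 2-D pixel-grid structure: each image becomes 784 flat pixels.
b = a.reshape(4, 28 * 28, 3)
print(b.shape, b.ndim)   # (4, 784, 3) 3

assert a.size == b.size  # reshape never changes the element count
```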
```javascript
function parse(STR_XPATH) {
    var xresult = document.evaluate(STR_XPATH, document, null, XPathResult.ANY_TYPE, null);
    var xnodes = [];
    var xres;
    // iterateNext() returns null when the result set is exhausted
    while ((xres = xresult.iterateNext())) {
        xnodes.push(xres);
    }
    return xnodes;
}
```
1. Find the files to clean up:
git rev-list --objects --all | grep "<keyword of the file to clean>"
2. List the commits that touched the file or directory:
git log --pretty=oneline --branches -- <file-or-directory>
3. Rewrite every commit to remove the file from Git history completely:
git f
1. Algorithm

y = w*x + b + ε
loss = Σᵢ (w*xᵢ + b - yᵢ)²
w' = w - lr * ∂loss/∂w   # gradient descent
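The update rule above, written out as a runnable sketch. The learning rate and data are illustrative, and the loss here is the mean squared error (the sum in the notes, divided by n):

```python
def train(xs, ys, lr=0.01, epochs=5000):
    """Fit y = w*x + b by gradient descent on the mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Partial derivatives of loss = (1/n) * sum((w*x + b - y)^2)
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw   # w' = w - lr * dloss/dw
        b -= lr * db   # b' = b - lr * dloss/db
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [3 + 2 * x for x in xs]     # noise-free line y = 2x + 3
w, b = train(xs, ys)
print(round(w, 2), round(b, 2))  # approximately 2.0 and 3.0
```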