The overall workflow of a big data project covers four aspects:
data collection, data storage and management, data processing and analysis, and data interpretation and visualization.
Contents
- Data Source
- Project Requirements
- Project Workflow
- 1. Data crawling and cleaning
- 2. Simulating a high-concurrency data stream with JMeter
- 3. Buffering with Kafka
- 4. Real-time processing with Flink
- 5. Storing data with MyCat + MySQL
- 6. Visualization with Flask + Ajax + ECharts
- Summary
Data Source
Download the expressway ETC "entering Shenzhen" dataset (178,396 records):
https://opendata.sz.gov.cn/data/dataSet/toDataDetails/29200_00403621
Project Requirements
(1) Generate 50+ records per second; a network load-testing tool such as JMeter can be used to produce a multi-point, concurrent high-speed data stream.
(2) Use Kafka to buffer the high-speed data.
(3) Use HBase or MyCat + MySQL to store the data.
(4) If MyCat + MySQL is used, design the business logic for sharding the data, and support queries and statistics over the global data.
(5) If HBase is used, design the rowkey and support "key-value" queries on the data.
(6) Visualize the query/statistics results of either approach, refreshing the results once per minute.
Project Workflow
Overall workflow:
1. Data crawling and cleaning
(To be added later; the 10,000-record sample provided on the open-data site can be used directly.)
2. Simulating a high-concurrency data stream with JMeter
1) Download JMeter from the official website.
2) Start JMeter
Go to the bin directory under the JMeter installation directory and double-click jmeter.bat to start JMeter.
3) Simulate the high-concurrency data stream
Create a thread group; its thread count and loop settings determine the overall send rate (the requirement here is 50+ records per second).
Add a Java Request sampler that defines the request each thread sends on every iteration, and switch its classname to the kafkameter producer sampler.
KafkaProducerSampler (placed in a Maven project):
package co.signal.kafkameter;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import com.google.common.base.Strings;
import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;
import org.apache.jorphan.logging.LoggingManager;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.log.Logger;
/**
* A {@link org.apache.jmeter.samplers.Sampler Sampler} which produces Kafka messages.
*
*
* @see "http://ilkinbalkanay.blogspot.com/2010/03/load-test-whatever-you-want-with-apache.html"
* @see "http://newspaint.wordpress.com/2012/11/28/creating-a-java-sampler-for-jmeter/"
* @see "http://jmeter.512774.n5.nabble.com/Custom-Sampler-Tutorial-td4490189.html"
*/
public class KafkaProducerSampler extends AbstractJavaSamplerClient {
private static final Logger log = LoggingManager.getLoggerForClass();
/**
* Parameter for setting the Kafka brokers; for example, "kafka01:9092,kafka02:9092".
*/
private static final String PARAMETER_KAFKA_BROKERS = "kafka_brokers";
/**
* Parameter for setting the Kafka topic name.
*/
private static final String PARAMETER_KAFKA_TOPIC = "kafka_topic";
/**
* Parameter for setting the Kafka key.
*/
private static final String PARAMETER_KAFKA_KEY = "kafka_key";
/**
* Parameter for setting the Kafka message.
*/
private static final String PARAMETER_KAFKA_MESSAGE = "kafka_message";
/**
* Parameter for setting Kafka's {@code serializer.class} property.
*/
private static final String PARAMETER_KAFKA_MESSAGE_SERIALIZER = "kafka_message_serializer";
/**
* Parameter for setting Kafka's {@code key.serializer.class} property.
*/
private static final String PARAMETER_KAFKA_KEY_SERIALIZER = "kafka_key_serializer";
/**
* Parameter for setting the Kafka ssl keystore (include path information); for example, "server.keystore.jks".
*/
private static final String PARAMETER_KAFKA_SSL_KEYSTORE = "kafka_ssl_keystore";
/**
* Parameter for setting the Kafka ssl keystore password.
*/
private static final String PARAMETER_KAFKA_SSL_KEYSTORE_PASSWORD = "kafka_ssl_keystore_password";
/**
* Parameter for setting the Kafka ssl truststore (include path information); for example, "client.truststore.jks".
*/
private static final String PARAMETER_KAFKA_SSL_TRUSTSTORE = "kafka_ssl_truststore";
/**
* Parameter for setting the Kafka ssl truststore password.
*/
private static final String PARAMETER_KAFKA_SSL_TRUSTSTORE_PASSWORD = "kafka_ssl_truststore_password";
/**
* Parameter for enabling SSL; "true" or "false".
*/
private static final String PARAMETER_KAFKA_USE_SSL = "kafka_use_ssl";
/**
* Parameter for setting the compression type. It is optional.
*/
private static final String PARAMETER_KAFKA_COMPRESSION_TYPE = "kafka_compression_type";
/**
* Parameter for setting the partition. It is optional.
*/
private static final String PARAMETER_KAFKA_PARTITION = "kafka_partition";
private KafkaProducer<String, String> producer;
@Override
public void setupTest(JavaSamplerContext context) {
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, context.getParameter(PARAMETER_KAFKA_BROKERS));
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, context.getParameter(PARAMETER_KAFKA_KEY_SERIALIZER));
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, context.getParameter(PARAMETER_KAFKA_MESSAGE_SERIALIZER));
props.put(ProducerConfig.ACKS_CONFIG, "1");
// check if kafka security protocol is SSL or PLAINTEXT (default)
if(context.getParameter(PARAMETER_KAFKA_USE_SSL).equals("true")){
log.info("Setting up SSL properties...");
props.put("security.protocol", "SSL");
props.put("ssl.keystore.location", context.getParameter(PARAMETER_KAFKA_SSL_KEYSTORE));
props.put("ssl.keystore.password", context.getParameter(PARAMETER_KAFKA_SSL_KEYSTORE_PASSWORD));
props.put("ssl.truststore.location", context.getParameter(PARAMETER_KAFKA_SSL_TRUSTSTORE));
props.put("ssl.truststore.password", context.getParameter(PARAMETER_KAFKA_SSL_TRUSTSTORE_PASSWORD));
}
String compressionType = context.getParameter(PARAMETER_KAFKA_COMPRESSION_TYPE);
if (!Strings.isNullOrEmpty(compressionType)) {
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, compressionType);
}
producer = new KafkaProducer<String, String>(props);
}
@Override
public void teardownTest(JavaSamplerContext context) {
producer.close();
}
@Override
public Arguments getDefaultParameters() {
Arguments defaultParameters = new Arguments();
defaultParameters.addArgument(PARAMETER_KAFKA_BROKERS, "${PARAMETER_KAFKA_BROKERS}");
defaultParameters.addArgument(PARAMETER_KAFKA_TOPIC, "${PARAMETER_KAFKA_TOPIC}");
defaultParameters.addArgument(PARAMETER_KAFKA_KEY, "${PARAMETER_KAFKA_KEY}");
defaultParameters.addArgument(PARAMETER_KAFKA_MESSAGE, "${PARAMETER_KAFKA_MESSAGE}");
defaultParameters.addArgument(PARAMETER_KAFKA_MESSAGE_SERIALIZER, "org.apache.kafka.common.serialization.StringSerializer");
defaultParameters.addArgument(PARAMETER_KAFKA_KEY_SERIALIZER, "org.apache.kafka.common.serialization.StringSerializer");
defaultParameters.addArgument(PARAMETER_KAFKA_SSL_KEYSTORE, "${PARAMETER_KAFKA_SSL_KEYSTORE}");
defaultParameters.addArgument(PARAMETER_KAFKA_SSL_KEYSTORE_PASSWORD, "${PARAMETER_KAFKA_SSL_KEYSTORE_PASSWORD}");
defaultParameters.addArgument(PARAMETER_KAFKA_SSL_TRUSTSTORE, "${PARAMETER_KAFKA_SSL_TRUSTSTORE}");
defaultParameters.addArgument(PARAMETER_KAFKA_SSL_TRUSTSTORE_PASSWORD, "${PARAMETER_KAFKA_SSL_TRUSTSTORE_PASSWORD}");
defaultParameters.addArgument(PARAMETER_KAFKA_USE_SSL, "${PARAMETER_KAFKA_USE_SSL}");
defaultParameters.addArgument(PARAMETER_KAFKA_COMPRESSION_TYPE, null);
defaultParameters.addArgument(PARAMETER_KAFKA_PARTITION, null);
return defaultParameters;
}
@Override
public SampleResult runTest(JavaSamplerContext context) {
SampleResult result = newSampleResult();
String topic = context.getParameter(PARAMETER_KAFKA_TOPIC);
String key = context.getParameter(PARAMETER_KAFKA_KEY);
String message = context.getParameter(PARAMETER_KAFKA_MESSAGE);
sampleResultStart(result, message);
final ProducerRecord<String, String> producerRecord;
String partitionString = context.getParameter(PARAMETER_KAFKA_PARTITION);
if (Strings.isNullOrEmpty(partitionString)) {
producerRecord = new ProducerRecord<String, String>(topic, key, message);
} else {
final int partitionNumber = Integer.parseInt(partitionString);
producerRecord = new ProducerRecord<String, String>(topic, partitionNumber, key, message);
}
try {
producer.send(producerRecord);
sampleResultSuccess(result, null);
} catch (Exception e) {
sampleResultFailed(result, "500", e);
}
return result;
}
/**
* Use UTF-8 for encoding of strings
*/
private static final String ENCODING = "UTF-8";
/**
* Factory for creating new {@link SampleResult}s.
*/
private SampleResult newSampleResult() {
SampleResult result = new SampleResult();
result.setDataEncoding(ENCODING);
result.setDataType(SampleResult.TEXT);
return result;
}
/**
* Start the sample request and set the {@code samplerData} to {@code data}.
*
* @param result
* the sample result to update
* @param data
* the request to set as {@code samplerData}
*/
private void sampleResultStart(SampleResult result, String data) {
result.setSamplerData(data);
result.sampleStart();
}
/**
* Mark the sample result as {@code end}ed and {@code successful} with an "OK" {@code responseCode},
* and if the response is not {@code null} then set the {@code responseData} to {@code response},
* otherwise it is marked as not requiring a response.
*
* @param result sample result to change
* @param response the successful result message, may be null.
*/
private void sampleResultSuccess(SampleResult result, /* @Nullable */ String response) {
result.sampleEnd();
result.setSuccessful(true);
result.setResponseCodeOK();
if (response != null) {
result.setResponseData(response, ENCODING);
}
else {
result.setResponseData("No response required", ENCODING);
}
}
/**
* Mark the sample result as {@code end}ed and not {@code successful}, and set the
* {@code responseCode} to {@code reason}.
*
* @param result the sample result to change
* @param reason the failure reason
*/
private void sampleResultFailed(SampleResult result, String reason) {
result.sampleEnd();
result.setSuccessful(false);
result.setResponseCode(reason);
}
/**
* Mark the sample result as {@code end}ed and not {@code successful}, set the
* {@code responseCode} to {@code reason}, and set {@code responseData} to the stack trace.
*
* @param result the sample result to change
* @param exception the failure exception
*/
private void sampleResultFailed(SampleResult result, String reason, Exception exception) {
sampleResultFailed(result, reason);
result.setResponseMessage("Exception: " + exception);
result.setResponseData(getStackTrace(exception), ENCODING);
}
/**
* Return the stack trace as a string.
*
* @param exception the exception containing the stack trace
* @return the stack trace
*/
private String getStackTrace(Exception exception) {
StringWriter stringWriter = new StringWriter();
exception.printStackTrace(new PrintWriter(stringWriter));
return stringWriter.toString();
}
}
pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>kafkameter</groupId>
<artifactId>kafkameter</artifactId>
<version>0.2.0</version>
<name>kafkameter</name>
<description>Kafka plugin for JMeter</description>
<url>http://signal.co</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.build.outputEncoding>UTF-8</project.build.outputEncoding>
<jdk.version>1.6</jdk.version>
</properties>
<dependencies>
<!-- Base Dependencies -->
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>14.0</version>
</dependency>
<dependency>
<groupId>com.google.code.findbugs</groupId>
<artifactId>jsr305</artifactId>
<version>1.3.9</version>
</dependency>
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.2.4</version>
</dependency>
<!-- JMeter Dependencies -->
<dependency>
<groupId>org.apache.jmeter</groupId>
<artifactId>ApacheJMeter_core</artifactId>
<version>2.11</version>
<scope>provided</scope>
<exclusions>
<exclusion>
<!-- XXX 2.5.1 included transitively but not available in Maven -->
<groupId>com.fifesoft</groupId>
<artifactId>rsyntaxtextarea</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.jmeter</groupId>
<artifactId>ApacheJMeter_java</artifactId>
<version>2.11</version>
<scope>provided</scope>
<exclusions>
<exclusion>
<!-- XXX 2.5.1 included transitively but not available in Maven -->
<groupId>com.fifesoft</groupId>
<artifactId>rsyntaxtextarea</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- Kafka Dependencies -->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>0.9.0.1</version>
<exclusions>
<!-- XXX Transitively includes log4j 1.2.15 which has bad metadata
http://stackoverflow.com/a/9047963 -->
<exclusion>
<groupId>com.sun.jmx</groupId>
<artifactId>jmxri</artifactId>
</exclusion>
<exclusion>
<groupId>com.sun.jdmk</groupId>
<artifactId>jmxtools</artifactId>
</exclusion>
<exclusion>
<groupId>javax.jms</groupId>
<artifactId>jms</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
<build>
<resources>
<resource>
<directory>src/main/java</directory>
<includes>
<include>**/*.properties</include>
</includes>
</resource>
</resources>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.3</version>
<configuration>
<source>${jdk.version}</source>
<target>${jdk.version}</target>
<optimize>true</optimize>
<showDeprecation>true</showDeprecation>
<showWarnings>true</showWarnings>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.2</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
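To make the sampler available in JMeter, the project is typically built with mvn package (the shade plugin bundles the dependencies into a single jar) and the resulting kafkameter jar is copied into JMeter's lib/ext directory; after restarting JMeter, the class can be selected as the Java Request classname.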
If you don't want to write it yourself, you can also download it here:
(link to be added later)
Fill in the request parameters: kafka_brokers (for example hadoop102:9092), kafka_topic (first) and kafka_message. Assuming the CSV columns are named XH, CKSJ, CX, SFZRKMC, RKSJ, SFZCKMC and CP (the order parsed by the Flink job below), kafka_message can be set to ${XH},${CKSJ},${CX},${SFZRKMC},${RKSJ},${SFZCKMC},${CP} so that each row is sent as one comma-separated message.
Use a CSV file as the data source (CSV Data Set Config).
When configuring the CSV Data Set Config: if the first row of the file already contains data, you must fill in the Variable Names field (giving each CSV column a name); if the first row holds the column names, Variable Names can be left empty.
3. Buffering with Kafka
Simply deploy a Kafka cluster on the big-data cluster. Before starting JMeter, make sure that:
1. Kafka has been started;
2. the topic first has been created (for example with the kafka-topics.sh script that ships with Kafka).
4. Real-time processing with Flink
Flink consumes the data collected in Kafka and writes it into the MySQL database.
Project structure:
The Load object:
public class Load {
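// Field names follow the dataset's pinyin column abbreviations: XH = record number,
// CX = vehicle type, CP = license plate, SFZRKMC / SFZCKMC = entry / exit toll-station name,
// RKSJ / CKSJ = entry / exit time.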
private int XH;
private String CX,SFZRKMC,SFZCKMC,CP;
private String CKSJ,RKSJ;
public int getXH() {
return XH;
}
public void setXH(int XH) {
this.XH = XH;
}
public String getCKSJ(){
return CKSJ;
}
public void setCKSJ(String CKSJ){
this.CKSJ=CKSJ;
}
public String getCX() {
return CX;
}
public void setCX(String CX) {
this.CX = CX;
}
public String getSFZRKMC() {
return SFZRKMC;
}
public void setSFZRKMC(String SFZRKMC) {
this.SFZRKMC = SFZRKMC;
}
public String getCP() {
return CP;
}
public void setCP(String CP) {
this.CP = CP;
}
public String getSFZCKMC() {
return SFZCKMC;
}
public void setSFZCKMC(String SFZCKMC) {
this.SFZCKMC = SFZCKMC;
}
public String getRKSJ(){
return RKSJ;
}
public void setRKSJ(String RKSJ){
this.RKSJ=RKSJ;
}
}
Flink processing:
Kafka connection configuration for the consumer:
Properties prop = new Properties();
prop.put("bootstrap.servers", "hadoop102:9092");
prop.put("zookeeper.connect", "hadoop102:2181");
prop.put("group.id", "first");
// key and value deserializers (this is a consumer, so deserializers rather than serializers)
prop.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
prop.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
prop.put("auto.offset.reset", "latest");
DataStream<String> inputStream = env.addSource(new FlinkKafkaConsumer<String>("first", new SimpleStringSchema(), prop));
Use a flatMap operator to process the data read from Kafka (the Flink source), turning each line into a Load object:
DataStream<Load> dataStream = inputStream.flatMap(new FlatMapFunction<String, Load>() {
@Override
public void flatMap(String line, Collector<Load> out) throws Exception {
String[] tokens = line.split(",");
Load load = new Load();
if (tokens.length == 7) {
load.setXH(Integer.parseInt(tokens[0]));
load.setCKSJ(tokens[1]);
load.setCX(tokens[2]);
load.setSFZRKMC(tokens[3]);
load.setRKSJ(tokens[4]);
load.setSFZCKMC(tokens[5]);
load.setCP(tokens[6]);
out.collect(load); // only emit complete records
}
}
});
Collect one second of data (Load objects) into a list and forward it to the sink:
dataStream.timeWindowAll(Time.seconds(1L)).apply(new AllWindowFunction<Load, List<Load>, TimeWindow>() {
@Override
public void apply(TimeWindow timeWindow, Iterable<Load> iterable, Collector<List<Load>> out) throws Exception {
List<Load> loads = Lists.newArrayList(iterable);
if (loads.size() > 0) {
System.out.println("Records received in this 1-second window: " + loads.size());
out.collect(loads);
}
}
})
// sink: write to the database
.addSink(new MysqlConnect());
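The fragments above leave out the surrounding job skeleton: creating the StreamExecutionEnvironment and calling env.execute(). A consolidated sketch that simply wires the fragments together (the class name and job name are illustrative) could look like this:
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.windowing.AllWindowFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.util.Collector;

public class EtcKafkaToMysqlJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // consumer properties, as configured above
        Properties prop = new Properties();
        prop.put("bootstrap.servers", "hadoop102:9092");
        prop.put("group.id", "first");
        prop.put("auto.offset.reset", "latest");

        // source: read raw CSV lines from the "first" topic
        DataStream<String> inputStream =
                env.addSource(new FlinkKafkaConsumer<String>("first", new SimpleStringSchema(), prop));

        inputStream
                // parse each comma-separated line into a Load object
                .flatMap(new FlatMapFunction<String, Load>() {
                    @Override
                    public void flatMap(String line, Collector<Load> out) {
                        String[] t = line.split(",");
                        if (t.length == 7) {
                            Load load = new Load();
                            load.setXH(Integer.parseInt(t[0]));
                            load.setCKSJ(t[1]);
                            load.setCX(t[2]);
                            load.setSFZRKMC(t[3]);
                            load.setRKSJ(t[4]);
                            load.setSFZCKMC(t[5]);
                            load.setCP(t[6]);
                            out.collect(load);
                        }
                    }
                })
                // collect one second of records into a batch
                .timeWindowAll(Time.seconds(1L))
                .apply(new AllWindowFunction<Load, List<Load>, TimeWindow>() {
                    @Override
                    public void apply(TimeWindow window, Iterable<Load> values, Collector<List<Load>> out) {
                        List<Load> batch = new ArrayList<Load>();
                        for (Load l : values) {
                            batch.add(l);
                        }
                        if (!batch.isEmpty()) {
                            out.collect(batch);
                        }
                    }
                })
                // sink: insert the batch into MySQL through MyCat
                .addSink(new MysqlConnect());

        env.execute("etc-kafka-to-mysql");
    }
}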
Obtain a connection to the TESTDB database exposed by MyCat:
public class Mysql {
private static DruidDataSource dataSource;
public static synchronized Connection getConnection() throws Exception {
// initialize the Druid pool only once, then hand out pooled connections
if (dataSource == null) {
dataSource = new DruidDataSource();
dataSource.setDriverClassName("com.mysql.cj.jdbc.Driver");
// MyCat listens on port 8066 and exposes the logical schema TESTDB
dataSource.setUrl("jdbc:mysql://hadoop:8066/TESTDB?useUnicode=true&characterEncoding=utf8");
dataSource.setUsername("root");
dataSource.setPassword("123456");
// initial size, maximum active connections and minimum idle connections of the pool
dataSource.setInitialSize(10);
dataSource.setMaxActive(75);
dataSource.setMinIdle(20);
}
return dataSource.getConnection();
}
}
Set the fields of each object on the prepared statement and execute the SQL insert (the Flink sink):
public class MysqlConnect extends RichSinkFunction<List<Load>> {
private PreparedStatement ps;
private Connection connection;
@Override
public void open(Configuration parameters) throws Exception {
super.open(parameters);
// obtain the database connection and prepare the insert statement
connection = Mysql.getConnection();
String sql = "insert into expressway(XH,CKSJ,CX,SFZRKMC,RKSJ,SFZCKMC,CP) values (?,?,?,?,?,?,?)";
ps = connection.prepareStatement(sql);
}
@Override
public void close() throws Exception {
super.close();
// release resources: close the statement before the connection
if (ps != null) {
ps.close();
}
if (connection != null) {
connection.close();
}
}
// XH, CKSJ, CX, SFZRKMC, RKSJ, SFZCKMC, CP are taken from the records produced by the Flink pipeline
@Override
public void invoke(List<Load> loads, Context context) throws Exception {
for (Load load : loads) {
ps.setInt(1, load.getXH());
ps.setString(2, load.getCKSJ());
ps.setString(3, load.getCX());
ps.setString(4, load.getSFZRKMC());
ps.setString(5, load.getRKSJ());
ps.setString(6, load.getSFZCKMC());
ps.setString(7, load.getCP());
ps.executeUpdate();
}
}
}
5. Storing data with MyCat + MySQL
MyCat acts as the MySQL middleware for global queries and inserts, while MySQL provides the actual storage.
If you run into problems with MyCat, see here. Sharding in MyCat is done by XH % 3 (XH is the record number).
schema.xml
rule.xml
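Their full contents are not reproduced above. As a rough sketch only (assuming MyCat 1.6, three dataNodes dn1-dn3 and the logical table expressway used by the Flink sink; dataNode/dataHost definitions omitted), the relevant fragments might look like this:
<!-- schema.xml: map the logical table to three dataNodes and attach the mod sharding rule (names are illustrative) -->
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100">
    <table name="expressway" primaryKey="XH" dataNode="dn1,dn2,dn3" rule="mod-long" />
</schema>

<!-- rule.xml: shard on XH with XH % 3 -->
<tableRule name="mod-long">
    <rule>
        <columns>XH</columns>
        <algorithm>mod-long</algorithm>
    </rule>
</tableRule>
<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
    <property name="count">3</property>
</function>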
6. Visualization with Flask + Ajax + ECharts
Structure of the visualization project:
1. The static directory holds the JSON data files and the style sheets.
2. The templates directory holds the visualization page index.html.
3. car_city.py crawls the mapping from license-plate prefixes to cities and stores it in city.json.
4. city_jinwei.py crawls the longitude/latitude of each city and stores it in jinwei.text.
5. product.py queries the database, converts the results into the ECharts data format and writes them to data.json.
6. server.py serves the site.
Core front-end code:
// call loadXMLDoc once per second
setInterval(loadXMLDoc,1000)
// request the JSON data file via Ajax
function loadXMLDoc() {
var xmlhttp;
if (window.XMLHttpRequest)
{
// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}
else
{
// code for IE6 and IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange=function()
{
if (xmlhttp.readyState==4 && xmlhttp.status==200)
{
var myArr = JSON.parse(this.responseText);
myFunction(myArr)
}
}
xmlhttp.open("GET","../static/data.json",true);
xmlhttp.setRequestHeader("Content-Type", "application/json;charset=UTF-8");
xmlhttp.send();
}
// ECharts option for one of the charts:
var option5= {
title: {
text: '全国城市车辆出入深圳数量分布',
subtext:'当前数据条数:'+result[7],
textStyle: {
color: 'blue',fontSize:34
},left:'center',subtextStyle:{
color: '#696969',fontSize:25,fontFamily:'bold'
},sublink:'https://opendata.sz.gov.cn/data/dataSet/toDataDetails/29200_00403621'
},
legend: {
data: ['全国各地车辆数'], // must match the name attribute in series
orient: 'vertical',
y: 'top',
x: 'right',
textStyle: {
color: 'LightSteelBlue',fontSize:25
}
},
tooltip: {
trigger: 'item',
formatter: function(params) {
return params.name + ' : ' + params.value[2];
}
,textStyle:{fontSize:30}
},
visualMap: {
min: 0,
max: 2000,
calculable: true,
inRange: {
color: ['#54FF9F','Blue', 'yellow', 'red']
},
textStyle: {
color: 'Cyan',fontSize:25
}
},
geo: {
map: 'china',
roam: true, // enable mouse zoom and pan
zoom: 1, // map zoom level
selectedMode: false, // selection mode: single | multiple
left: 0,
right: 0,
top: 0,
bottom: 0,
// layoutCenter: ['50%', '50%'], // if set, left/right/top/bottom are ignored
layoutSize: '120%',
label: {
emphasis: {
show: false
}
},
itemStyle: {
normal: {
areaColor: 'black',
borderWidth: 1.1,
borderColor: '#43d0d6'
},
emphasis: {
areaColor: '#069'
}
}
},
series: [{
name: '全国各地车辆数',
type: 'scatter',
coordinateSystem: 'geo',
symbolSize: 12,
label: {
normal: {
show: false
},
emphasis: {
show: false
}
},
data: convertScatterData(scatterVal)
}]
};
Core server-side code:
from flask import Flask,render_template
app = Flask(__name__)  # create the Flask application instance
@app.route("/index", methods=['GET'])
def index():
    return render_template('index.html')

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5000)
A route '/index' is defined, and the host is set to 0.0.0.0 so that other clients can reach the site; the port is 5000.
When everything is running correctly:
Web page 1:
Web page 2 is the main project; web page 1 is just something I threw together for fun.
Web page 2:
Summary
The project still lacks a real understanding of the business: some of the charts visualize for the sake of visualizing, without considering what they would actually be used for. Starting from real needs would make it much easier to cope with formal business requirements in the future. I plan to work on some enterprise projects next to improve my business skills.