nvarchar

NVarchar and SqlParameter

Submitted by 我的梦境 on 2019-12-01 11:03:15
I'm developing an application that must support several languages. To handle special characters I'm using NVARCHAR for my text fields, so a literal SQL query for a text field is:

insert into tbl_text(text) values (N'Chci tančit v oblasti')

My problem is putting this into a SqlCommand. If I write "insert into tbl_text(text) values (N@text)", it saves the literal string "N@text" in the DB table, of course. Do you know some way to do it? I'm using C# and SQL Server 2008. Sorry if my question was hard to understand; my English is poor.

Answer: Add(string, object) has been deprecated for this reason (from Pablo Castro of the SQL Server …
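A minimal sketch of the parameterized version, assuming an open SqlConnection named conn and a column wide enough for the text (the 200 length here is a placeholder, not from the question). The SQL text uses a plain @text placeholder with no N prefix; typing the parameter as SqlDbType.NVarChar is what makes the value travel to the server as Unicode:

```csharp
using System.Data;
using System.Data.SqlClient;

// conn is assumed to be an already-open SqlConnection.
using (var cmd = new SqlCommand(
    "insert into tbl_text(text) values (@text)", conn))
{
    // Explicitly typed as NVarChar so the special characters survive.
    // The N'' prefix is only needed on string literals, never on
    // parameter placeholders.
    cmd.Parameters.Add("@text", SqlDbType.NVarChar, 200).Value =
        "Chci tančit v oblasti";
    cmd.ExecuteNonQuery();
}
```

Using the strongly typed Parameters.Add overload (rather than the deprecated Add(string, object)) also avoids the provider guessing the parameter type from the CLR value.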

Convert nvarchar to bigint in Sql server 2008

Submitted by ≡放荡痞女 on 2019-12-01 02:54:47
I want to insert all rows of a table into another table, and I also want to convert an nvarchar field into bigint, but when I use convert(bigint, col1), SQL Server shows an error:

Error converting data type nvarchar to bigint

How can I fix this problem?

Answer: You could use ISNUMERIC to restrict the update to rows that are actually numeric:

UPDATE dbo.YourTable
SET BigIntColumn = CAST(NVarcharColumn AS BIGINT)
WHERE ISNUMERIC(NVarcharColumn) = 1

That converts the rows that can be converted; the others need to be dealt with manually. Another answer suggests converting the bigint to nvarchar instead, not vice versa: cast(Other …
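As a sketch of the insert-into-another-table case (the table and column names below are placeholders, not from the question): ISNUMERIC also returns 1 for values such as '.', '$', and '1e5' that still fail a BIGINT cast, so on SQL Server 2008 a pattern check is a safer filter:

```sql
-- Copy only rows whose nvarchar value is a plain string of digits.
-- ISNUMERIC alone would let '.', '$', '1e5' etc. through, and the
-- CAST would still fail on those rows.
INSERT INTO dbo.TargetTable (BigIntColumn)
SELECT CAST(col1 AS BIGINT)
FROM dbo.SourceTable
WHERE col1 NOT LIKE '%[^0-9]%'
  AND col1 <> '';
```

On SQL Server 2012 and later, TRY_CONVERT(BIGINT, col1) does the same job in one step, returning NULL for unconvertible values.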

NVARCHAR or VARCHAR: is it better to use sizes that are multiples of 2 or round numbers?

Submitted by 老子叫甜甜 on 2019-11-30 23:09:22
Question: My question is what is better to use when generating columns in SQL. Should the size of an nvarchar (varchar) column be a multiple of 2 (32, 64, 128), or does it not matter, so we can use round numbers like 100 or 50? Thank you very much for answers with reasons. Greetings to all.

Answer 1: It doesn't make any difference. Use the size appropriate for your data. For instance in SQL Server, if you look at the Anatomy of a Record you'll see that your size translates into record offsets that are dependent on the …

Convert NVARCHAR to DATETIME in SQL Server 2008

Submitted by 落花浮王杯 on 2019-11-30 21:24:42
In my table:

LoginDate
2013-08-29 13:55:48

The LoginDate column's datatype is nvarchar(150). I want to convert the LoginDate column into datetime format using a SQL command. Expected result:

LoginDate
29-08-2013 13:55:48

Answer 1:

DECLARE @chr nvarchar(50) = (SELECT CONVERT(nvarchar(50), GETDATE(), 103))
SELECT @chr chars, CONVERT(date, @chr, 103) date_again

Answer 2:

SELECT CONVERT(NVARCHAR, LoginDate, 105) + ' ' + CONVERT(NVARCHAR, LoginDate, 108) AS LoginDate
FROM YourTable

Output
-------------------
29-08-2013 13:55:48

Answer 3:

alter table your_table alter column LoginDate datetime;

SQLFiddle demo. As your data is nvarchar …

What is the performance penalty of XML data type in SQL Server when compared to NVARCHAR(MAX)?

Submitted by ﹥>﹥吖頭↗ on 2019-11-30 07:59:49
I have a database that is going to keep log entries. One of the columns in the log table contains serialized (to XML) objects, and a guy on my team proposed to go with the XML data type rather than NVARCHAR(MAX). This table will have logs kept "forever" (archiving some very old entries may be considered in the future). I'm a little worried about the CPU overhead, but I'm even more worried that the DB can grow faster (FoxyBOA from the referenced question got a 70% bigger DB when using XML). I have read this question and it gave me some ideas, but I am particularly interested in clarification on whether the …

Special characters displaying incorrectly after BULK INSERT

Submitted by 本小妞迷上赌 on 2019-11-29 11:00:29
Question: I'm using BULK INSERT to import a CSV file. One of the columns in the CSV file contains values with fractions (e.g. 1m½f). I don't need to do any mathematical operations on the fractions, as the values will just be used for display purposes, so I have set the column to nvarchar. The BULK INSERT works, but when I view the records within SQL the fraction has been replaced with a cent symbol (¢), so the displayed text is 1m¢f. I'm interested to understand why this is happening.
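A plausible cause (an assumption on my part; the truncated question doesn't confirm it): BULK INSERT defaults to the server's OEM code page rather than the Windows ANSI code page most editors save in. Under OEM code page 850, the Windows-1252 byte for ½ (0xBD) decodes as ¢, which reproduces the exact symptom:

```python
# The CSV was likely saved in Windows-1252, where ½ is byte 0xBD.
raw = "1m½f".encode("cp1252")

# BULK INSERT's default OEM code page (cp850 on many systems) maps
# byte 0xBD to the cent sign, which is the corruption seen in SQL.
print(raw.decode("cp850"))   # 1m¢f  (what SQL Server stored)
print(raw.decode("cp1252"))  # 1m½f  (what was intended)
```

If this is the cause, adding CODEPAGE = 'ACP' (or the file's actual code page) to the BULK INSERT options should keep the characters intact.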

Linq to SQL nvarchar problem

Submitted by 梦想的初衷 on 2019-11-29 09:52:07
I have discovered a huge performance problem in Linq to SQL. When filtering a table on a string column, the parameters passed to SQL Server are always nvarchar, even when the underlying column is varchar. This results in table scans instead of seeks, a massive performance issue.

var q = (from a in tbl
         where a.index == "TEST"
         select a);
var qa = q.ToArray();

The parameter is passed through as an nvarchar, which causes the entire index to be converted from varchar to nvarchar before being used. If the parameter is a varchar, it's a very fast seek. Is there any way to override or change this? Thanks.
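One commonly cited workaround, shown here as a sketch (the class shape and the VarChar(50) size are assumptions, since the question doesn't show its mapping): declare the column's server type as VarChar in the Linq to SQL mapping, so the provider sends varchar parameters and the index can be seeked without an implicit conversion:

```csharp
using System.Data.Linq.Mapping;

[Table(Name = "tbl")]
public class TblRow
{
    // Declaring the DbType as VarChar (not NVarChar) makes Linq to SQL
    // emit varchar parameters for comparisons against this column,
    // avoiding the index-wide varchar-to-nvarchar conversion.
    [Column(Name = "index", DbType = "VarChar(50) NOT NULL")]
    public string Index { get; set; }
}
```

If the model is generated from a DBML file, the equivalent change is setting the column's Server Data Type there so the generated attribute carries the VarChar type.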

Determine varchar content in nvarchar columns

Submitted by ╄→尐↘猪︶ㄣ on 2019-11-28 21:52:26
I have a bunch of NVARCHAR columns which I suspect contain data that would store perfectly well in VARCHAR columns. However, I can't just change the columns' type to VARCHAR and hope for the best; I need to do some sort of check first. I want to do the conversion because the data is static (it won't change in the future) and the columns are indexed, so they would benefit from a smaller (varchar) index compared to the current (nvarchar) index. If I simply say

ALTER TABLE TableName ALTER COLUMN columnName VARCHAR(200)

then I won't get an error or a warning, but any Unicode data will be silently truncated/lost. How do I check?
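A minimal sketch of such a check, reusing the question's placeholder names: round-trip each value through VARCHAR and flag rows that come back different, since characters outside the column's code page turn into '?' on the way down and therefore fail the comparison:

```sql
-- Rows whose nvarchar content does NOT survive a varchar round trip.
-- If this returns no rows, the ALTER COLUMN should be lossless
-- (for this collation and length).
SELECT columnName
FROM dbo.TableName
WHERE columnName <> CONVERT(NVARCHAR(200),
                     CONVERT(VARCHAR(200), columnName));
```

The round trip checks code-page compatibility at the column's own collation; rows longer than the target VARCHAR length would still need a separate LEN check before altering.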